Why does the CMIP5 data on Raijin have such a complicated directory structure?


The CMIP5 dataset is a petabyte-scale collection contributed by as many as 60 different modelling groups, so it is inherently complicated to organise, and it doesn't help that some modelling groups didn't stick to the rules and did their own thing. Unfortunately, the guidelines for versioning the dataset were not sufficiently detailed, so they were interpreted differently by different groups. When the climate community started downloading data to Raijin, it was decided that the only way to keep track of a dataset's version was to re-create the Data Reference Syntax (DRS) directory structure used by the publishing web servers (THREDDS), since those paths are unique. We also always download the originally published dataset, never replicas from other nodes.
When a web server reaches its capacity, new datasets are published from a new server, which means the same model, and sometimes even the same experiment, can appear under a different root.
Currently NCI is re-downloading the latest versions of the CMIP5 non-Australian data into a more coherent directory structure. The new replicated data is stored in the al33 project. Refer to the NCI climate community page for information and updates.
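The DRS segment order is fixed by the CMIP5 specification (activity/product/institute/model/experiment/frequency/realm/MIP table/ensemble/version/variable), even when the root above it varies between servers. As a rough sketch, assuming a hypothetical al33-style path for illustration, you can map the trailing segments of a dataset path to DRS fields like this:

```python
# Names of the DRS components, in the order fixed by the CMIP5 spec.
DRS_FIELDS = [
    "activity", "product", "institute", "model", "experiment",
    "frequency", "realm", "mip_table", "ensemble", "version", "variable",
]

def parse_drs(path):
    """Map the trailing DRS segments of a dataset path to field names.

    Only the last len(DRS_FIELDS) segments are used, so the (variable)
    root prefix of the path does not matter.
    """
    segments = path.strip("/").split("/")[-len(DRS_FIELDS):]
    return dict(zip(DRS_FIELDS, segments))

# Illustrative path only -- the exact root under al33 may differ.
info = parse_drs(
    "/g/data/al33/replicas/CMIP5/output1/MOHC/HadGEM2-ES/rcp45/"
    "mon/atmos/Amon/r1i1p1/v20111128/tas"
)
print(info["model"], info["experiment"], info["version"])
# HadGEM2-ES rcp45 v20111128
```

Because the version (here v20111128) is a DRS component, two copies of the same variable published at different times end up in sibling directories rather than overwriting each other.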

Do CMIP5 variables coming from the same simulation have the same version number?


This question is difficult to answer and it is a real sore point with CMIP5; changes are being implemented so that, hopefully, the same problem won't recur in CMIP6.
There is no way to be 100% sure that two versions come from the same simulation. The versioning instructions were quite unclear and were interpreted differently by different modelling groups. In the last couple of years a few groups (for example GFDL) have started adding a "simulation_id" to their attributes, but this is the exception rather than the norm. You might like to assume that the same version means the same simulation, but a different version really just means that a group of variables was published later, or perhaps that one of them was calculated wrongly in post-processing and has been re-published under a new version.
A completely different simulation with a different configuration, initialization, etc. should have a different ensemble code, so anything labelled r1i1p1, for example, should come from the same run, even if part of it or its post-processing might have been updated.
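The ensemble code itself follows the CMIP5 rXiYpZ convention, where the three integers identify the realization, initialization method, and physics version. A minimal sketch of splitting it into its parts (the function name is mine, not from any CMIP tool):

```python
import re

def parse_ensemble(code):
    """Split a CMIP5 ensemble code like 'r1i1p1' into its
    realization, initialization and physics indices."""
    m = re.fullmatch(r"r(\d+)i(\d+)p(\d+)", code)
    if m is None:
        raise ValueError(f"not a valid CMIP5 ensemble code: {code}")
    r, i, p = (int(g) for g in m.groups())
    return {"realization": r, "initialization": i, "physics": p}

print(parse_ensemble("r1i1p1"))
# {'realization': 1, 'initialization': 1, 'physics': 1}
```

Two files sharing r1i1p1 should therefore belong to the same run; a change in any of the three indices signals a genuinely different simulation, not just a re-publication.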
More information might be available directly from the modelling groups.
As a further precaution, if you find that some of the variables you need have a more recent version, you can e-mail climate_help and we will check whether everything you need is up to date. Users request only what they need, so it is quite possible that someone has updated only part of an ensemble.

Using the ARCCSS DMPonline is not useful for me as I don't use NCI servers.



Using DMPonline is really independent of NCI: while it contains some specific information on NCI systems, since those are common to many in the Centre, it is mostly about data management in general, regardless of which hardware you use. Even if you work only on your laptop, it is good to have a data workflow: a plan of what you will be doing at the different stages of your research. For example, to publish an RDA record for your data with us, you would now fill in one of these plans; the advantage is that you can easily export the plan as a document which you can always adapt and re-use.

DMPs will be compulsory for universities and ARC grants, and publishing your data is already compulsory for most journals. Also, at CMS we really want to hear from users who are not using NCI, users we don't normally hear from, so that we get a better idea of what everybody in the Centre is doing and can create new training resources and support all our users better.