OASIS3 FAQ


  • General
    • Q: Where and how can I get the OASIS3 sources?
      • A: Please get all download details here.
    • Q: Who is using OASIS3? 
    • Q: I compiled and ran toy example toyoasis3 but it aborts before the end of the coupled run; what should I do? 
      • A: Go to the working directory and have a look at the following files:
        • cplout (OASIS general log file)
        • Oasis.prt (OASIS communication log file)
        • oceout*/atmout*, cheout* (oceoa3/atmoa3/cheoa3 general log files)
        • toyoce.prt*/toyatm.prt*/toyche.prt* (oceoa3/atmoa3/cheoa3 communication log files)
    • Q: I compiled and ran toy example toyoasis3 but it aborts when reading the bt42orca file?
      • A: The bt42orca file is a binary file containing the weights and addresses for the MOZAIC transformation applied to the CORUNOFF field. If the bt42orca file is present in the working directory, the abort could be an endianness (http://en.wikipedia.org/wiki/Endianness) problem: the file was written with one byte order and is being read with the other. You may have to compile OASIS with a compiler option that changes the byte order of unformatted I/O on your platform.
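As an illustration of why the wrong byte order makes such a read abort (this snippet is not OASIS3 code; it only mimics the 4-byte record-length marker that most Fortran compilers write in front of each unformatted record):

```python
import struct

# The record-length marker as written on a big-endian platform:
marker = struct.pack(">i", 1024)
# The same four bytes misread on a little-endian platform:
wrong = struct.unpack("<i", marker)[0]
print(wrong)  # 262144 instead of 1024: the reader then tries to
              # read a 262144-byte record and aborts
```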
  • Interpolations and Transformations
    • Q: When I use MASK and EXTRAP, OASIS3 crashes with message "Stop in extrap: no progress in iteration"; what should I do?
      • A: This may depend on the number of neighbours you specify in the namcouple for the EXTRAP operation. Start by trying 1 neighbour, i.e. putting "NINENN  1   1   1" in the namcouple. For a higher number of neighbours, the extrapolation may simply be impossible given your grid and your mask. More importantly, you probably should not be using MASK and EXTRAP at all: in the current OASIS3 version, a value based on non-masked source grid points can be assigned to all non-masked target points (see the options for the different interpolations in the User Guide).
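As a sketch, the relevant transformation lines of a namcouple field entry might then look like this (the MASK value 999.999 is illustrative; only the NINENN line is taken from the answer above, and one parameter line follows each analysis keyword):

```
 MASK EXTRAP
 999.999
 NINENN  1   1   1
```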
  • Process Management
    • Q: Is it possible to couple a component that is MPI parallelized with a component that is MPI/OpenMP parallelized?
    • A: In principle there is no problem for OASIS3 to couple such a configuration, but you have to be careful about the way the send/receive of the coupling fields is implemented. In the OpenMP-parallelized component, make sure that only the master OpenMP thread communicates with OASIS3; for that component, declare only the number of MPI processes in the namcouple. The only problem you may encounter is with the operating system of your platform: assigning the OpenMP threads and MPI processes to the allocated cores of the nodes, and using a different OMP_NUM_THREADS for each executable. The method we use at IDRIS relies on the "task_geometry" functionality (it was not installed on the machine and we had to ask IDRIS to install it). Here is a generic example launching such a hybrid configuration, with nemo parallelized on 30 MPI processes and wam parallelized on one MPI process with 32 OpenMP threads:
    • ....
      # batch request
      # @ 
      #to launch
      poe -pgmmodel mpmd -cmdfile ./runfile
      with a run_file which could be something as follows :
      env OMP_NUM_THREADS=1 ./oasis3
      env OMP_NUM_THREADS=1 ./nemo
      env OMP_NUM_THREADS=1 ./nemo
      env OMP_NUM_THREADS=1 ./nemo
      env OMP_NUM_THREADS=32 ./wam
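On IBM LoadLeveler systems, the "task_geometry" directive mentioned above groups MPI task ids by node. A hedged sketch for a small version of this setup (the task numbering and node layout are illustrative, not taken from the original answer):

```
# @ job_type      = parallel
# @ task_geometry = {(0,1,2,3)(4)}
```

Here tasks 0-3 (oasis3 and the nemo entries of the runfile) share one node, while task 4 (wam) sits alone on a node so that its 32 OpenMP threads have that node's cores to themselves.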
    • Q: How to use NEMO and its XMLIO server in coupled mode with OASIS3?
    • A: There are two options:
      •  1) ioserver in connected mode (using_server = .FALSE. and using_oasis = .TRUE. in the xmlio_server.def file). This is the "standard mode", with a standard namcouple as the coupling configuration file.
      •  2) ioserver in server mode (using_server = .TRUE. and using_oasis = .TRUE. in the xmlio_server.def file). For this you need a version of OASIS3 later than r2709, because the module mod_prism_get_comm is required to compile and run NEMO in such a coupling mode. You also need to compile XMLIO with the cpp key "USE_OASIS". To run, you have to declare the processes used by the ioserver in the namcouple file as follows:
          MPI1 25 25 ---> atm model on 25 processes
          5 5 ---> NEMO model on 5 processes 
          1 0 ---> ioserver on 1 process
        You also have to be careful about the names of the models you put into the namcouple file:
          3 lmdz.x oceanx ionemo
        These names have to be the same as in xmlio_server.def: client_id = 'oceanx', server_id = 'ionemo'.
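Putting the server-mode settings together, the corresponding xmlio_server.def would contain something like the following (the layout is a sketch; the values are the ones quoted above):

```
using_server = .TRUE.
using_oasis  = .TRUE.
client_id    = 'oceanx'
server_id    = 'ionemo'
```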
© Copyright ENES Portal 2011