EPhysics Portal


Aus-VO is funded this year (and hopefully next) to start work on the inclusion of astrophysics theory in the Aus-VO project. What we actually do is quite flexible, provided it leads to us learning about good and bad ways to approach the problem.

We are writing a PaperOnEPhysicsPortal for submission to the proceedings of the Grid 2004 workshop, in Pittsburgh in November.


We decided that rather than just picking out some small set of freely available, generically useful N-body simulation codes and making a portal for users to run them, we would do the work in the context of science we are genuinely interested in at Melbourne. We chose to investigate what numerical codes were available in the magneto-hydrodynamic realm that might be applied to a number of problems the neutron star research group here is interested in.


The classic MHD code is Zeus. I estimate it is cited, in one or other of its forms, in over 50 refereed papers. It exists as Zeus-2D, Zeus-3D and Zeus-MP, the last being an MPI version of the 3D code. You can find all sorts of web pages on the various versions by plugging "Zeus MHD" into Google. Another code we might look at is XOOPIC - there's not much in the way of docs for this, but one of the code authors will visit us in Q4 2003, so we'll know more then.

Application areas

In discussions with Andrew Melatos, the PI of the neutron star group, we decided that Zeus-MP could be applied to a number of important, unsolved aspects of these highly magnetic and complex systems immediately. These include but are not limited to: the shear of magnetically thin and thick plasmas / fluids flowing past each other and the generation and number density of vortices as a function of size; the "splashdown" of heavy hot gas in a gravitational potential onto a thin, magnetic sheet; and others which I can't recall this instant.

VO issues

We intend to tackle the above problems, but also build a prototype portal which provides a runtime environment for sweeping parameters through pre-configured simulation codes, archives and describes the results with metadata, and provides some on-line visualisation and analysis tools to apply to the results. Initially the "MHD Portal" would be for our own scientific use, but ultimately, resources permitting, we might one day open it up for general use.


A website for the user to:

  1. Configure, start and monitor ZeusMP jobs on the Grid.
  2. Access and analyse output data.


The user can:

  1. Get an account on request (email us)
  2. Login with a username and password (portal specific)
  3. Upload one or more grid certificates.
  4. Authenticate at least one certificate (via https & passphrase). Choose one cert to use.
  5. Create a new experiment
  6. Select the Zeus model (jet, etc)
  7. Set parameters/sweep ranges specific to the chosen model.
  8. Press go!
  9. We keep one list of all resources. For a user's experiment, either create a sub-list with respect to the user's certificate, or just try to use all resources and hope for graceful failure.
  10. Monitor coarse experiment status, e.g. done, pending, failed. This is per individual job. Webpage or applet.
  11. Stop experiment and system tidies up.
  12. Inspect stdout & stderr of failed jobs if they want to.
  13. Cannot click 'try again' for individual jobs, nor cancel individual ones.
  14. Results are sent back to the portal and stored in a directory structure (USERS->BRETT->JOB_N) with a job description file, input files and a parameter set file. Also a VOTable with metadata of the results.
  15. See list of above files and fetch individually or all (.tar.gz). We do NOT use htaccess.
  16. User can select one or more files for summary, basic analysis and/or display. HDF 3D files are supported here.

We note that most steps are generic. Only specs 6 and 7 are Zeus-specific. Spec 16 is HDF-specific.
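To make the sweep in specs 6-8 concrete, here is a minimal Nimrod-style plan file sketch. The parameter name, executable and file names are hypothetical; a real Zeus model would define its own, and the exact syntax should be checked against the Nimrod/GridBroker docs.

```
parameter density float range from 0.1 to 1.0 step 0.1

task main
    copy zeusmp.exe node:.
    node:execute ./zeusmp.exe $density
    copy node:output.hdf output.$jobname.hdf
endtask
```

Each parameter combination in the sweep becomes one job; the final copy brings the results back for archiving with the metadata described in spec 14.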

  • PortalTaskList - our current set of jobs to get us on our way to a simple, generic portal for Astro and Particle Physics.


We use a number of existing components to build our portal quickly. Much of the work is connecting the various pieces.


  • Gridsphere : portlet framework (see below)
  • GridBrokerServer (brett) : running in a separate JVM, provides access to FarmingEngines even if Tomcat is stopped.
  • Nimrod / GridBroker : provide plan file parsing and experiment running and monitoring APIs. We allow for both of these by using the Adaptor pattern (aka thin wrapper) so our code doesn't see them directly.
  • MyProxy : a storage server for user credentials
  • GridPortlets: associated with Gridsphere, these portlets provide generic Grid utilities such as credential uploading.
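The Adaptor pattern mentioned above can be sketched as a thin wrapper interface with one implementation per backend. The method names and return values below are illustrative assumptions, not the actual ExpService API.

```java
// Thin wrapper ("Adaptor") so portal code never sees Nimrod or
// GridBroker classes directly. Method names are illustrative only.
interface ExpService {
    String submit(String planFile);      // returns an experiment id
    String status(String experimentId);  // e.g. "pending", "done", "failed"
    void stop(String experimentId);
}

// One adaptor per backend; each would translate these calls into the
// backend's own API (stubbed out here).
class GridBrokerExpService implements ExpService {
    public String submit(String planFile) { /* call GridBrokerServer via RMI */ return "gb-1"; }
    public String status(String id) { return "pending"; }
    public void stop(String id) { /* tell BrokerServer to tidy up */ }
}

class NimrodExpService implements ExpService {
    public String submit(String planFile) { /* invoke Nimrod API */ return "nim-1"; }
    public String status(String id) { return "pending"; }
    public void stop(String id) { /* cancel via Nimrod */ }
}
```

The submission and monitoring portlets would then be written against ExpService only, so swapping NimrodG for GridBrokerServer is a one-line configuration change.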

We connect all these bits together as shown in the portal component interaction diagram:

From the above diagram, a typical scenario might be:

  • User logs into the WebServer from User Machine 2
  • If user hasn't already, she uses GridPortlets to upload/manage her Grid credentials to MyProxy Server.
  • (The user can also use any other machine (such as User Machine 1) to upload credentials using existing MyProxy tools.)
  • User selects and configures an experiment to run
  • Ephysics Portlets take and process the user's input.
  • Ephysics Services talk via RMI to GridAccess machine, running the BrokerServer.
  • BrokerServer submits a GRAM job to the cluster
  • GRAM job (based on a Nimrod plan file) executes.
  • Files are stored in GridStore, directly from Grid Cluster.
  • User monitors job (via Portal). She sees stderr, stdout and status.
  • User browses files (via a generic FileManager portlet)
  • User visualises files, via a specific portlet.


There is a portlet for each major step the user does:

  • LoginPortlet (gridsphere) : user logs in to GridSphere with a username and password. This is independent of NIS, Grid Certificates, etc.
  • FileManagerPortlet (gridsphere) : user uploads a file - in this case, a plan file. This existing portlet also allows users to edit their files.
  • FileBrowserPortlet (???) : given a directory, provides browsing and download access to the tree. Different from FileManagerPortlet, which is for a specific directory and is a 'flat' structure.
  • AuthPortlet (gridportlets) : communicates with MyProxy and provides other portlets with a valid proxy. It provides a 'default' certificate, which we use. Access to this proxy is via the SecurityService.
  • ExpSubmissionPortlet (brett) : gets the proxy from AuthPortlet and the plan file from FileManagerPortlet. Communicates with GridBrokerServer to submit the experiment. The FileManager portlet allows the user to view and edit their uploaded plan file, so we don't need to do that here. Can communicate with either NimrodG or GridBrokerServer via different implementations of the ExpService.
  • MonitorPortlet (brett) : polls GridBrokerServer (or Nimrod, via the thin wrapper) and displays the state of jobs. Displays stderr/stdout as required.
  • VisualiserPortlet (???) : takes resultant data files and visualises them.
  • ZeusPlanFilePortlet (???) : creates a plan file from a user interface. Each application has its own plan-file maker portlet.

Other points

  • Multiple experiments : if one user can have many experiments (many GridbusFarmingEngines), the JobMonitor, Results and ExpSubmission portlets will all need a 'current experiment' concept. This could be done via Portlet Messages (see the communication mechanisms below). The choice of 'active' experiment could be a separate portlet? All portlets need to agree on the active experiment or it would be confusing for the user. This is done via the ExperimentService, which maintains a user's current experiment variable.
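The ExperimentService idea above amounts to a small per-user registry that all portlets consult. A minimal sketch, with hypothetical class and method names (the real service would also need to persist this across restarts):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the "current experiment" idea: portlets ask one shared
// service which experiment a user is working on, instead of each
// tracking it themselves. Names are illustrative, not the real API.
class ExperimentService {
    private final Map<String, String> currentExperiment = new ConcurrentHashMap<>();

    // Called by the 'active experiment' chooser portlet.
    void setCurrentExperiment(String user, String experimentId) {
        currentExperiment.put(user, experimentId);
    }

    // Called by the JobMonitor, Results and ExpSubmission portlets.
    String getCurrentExperiment(String user) {
        return currentExperiment.get(user);
    }
}
```

Because every portlet reads the same variable, they automatically agree on the active experiment.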


We use GridSphere to contain and control a number of independent portlets. Clearly, portlets need to communicate, but we don't want to couple them too closely. Currently I know of a few different ways to communicate:

  • SportletContext : persistent across sessions, but not across logins. Stores objects in memory.
  • PortletData : (checked) stores simple Strings; persistent across logins and server restarts, so it's file-based.
  • PersistenceManagerRdbms : stores objects serialized in a database.
  • Portlet Messages : not yet part of JSR 168, but works in Gridsphere. Send a message to update, etc. from one portlet to another, or to many. This is pretty useful!
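The Portlet Messages mechanism is essentially publish/subscribe. This is a generic sketch of that idea, NOT the GridSphere API: a broker delivers named messages to any portlet that has registered a listener.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Generic publish/subscribe sketch (not GridSphere's actual classes).
interface MessageListener {
    void onMessage(String topic, String payload);
}

class MessageBroker {
    private final Map<String, List<MessageListener>> listeners = new HashMap<>();

    // A portlet registers interest in a topic, e.g. "experiment.changed".
    void subscribe(String topic, MessageListener l) {
        listeners.computeIfAbsent(topic, t -> new ArrayList<>()).add(l);
    }

    // One portlet sends; every subscriber on the topic is notified.
    void send(String topic, String payload) {
        for (MessageListener l : listeners.getOrDefault(topic, List.of()))
            l.onMessage(topic, payload);
    }
}
```

For example, the 'active experiment' chooser could send on an "experiment.changed" topic, and JobMonitor and Results would each subscribe and refresh themselves, without the portlets referencing one another directly.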


  • experiment - a collection of jobs, as defined by one plan file, e.g. 'my chemistry experiment which sweeps parameter space'
  • job - a single job submitted to GRAM. It is a collection of tasks, e.g. 'run this program with parameters 10,10,20'
  • task - a set of unix-style commands. 'cd tmp, copy chem.exe, ./chem.exe, copy results'
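The experiment -> job -> task hierarchy above maps directly onto a simple containment data model. A minimal sketch; these are illustrative classes, not the Gridbus or Nimrod ones:

```java
import java.util.List;

// task: a set of unix-style commands.
class Task {
    final List<String> commands;          // e.g. "copy chem.exe", "./chem.exe"
    Task(List<String> commands) { this.commands = commands; }
}

// job: one GRAM submission, made up of tasks.
class Job {
    final List<Task> tasks;
    Job(List<Task> tasks) { this.tasks = tasks; }
}

// experiment: all the jobs defined by one plan file,
// typically one job per point in the parameter sweep.
class Experiment {
    final String planFile;
    final List<Job> jobs;
    Experiment(String planFile, List<Job> jobs) {
        this.planFile = planFile;
        this.jobs = jobs;
    }
}
```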

-- DavidBarnes - 18 Sep 2003

-- BrettBeeson - 12 Dec 2003

-- BrettBeeson - 27 Feb 2004