casa  $Rev:20696$
simple_cluster.simple_cluster Class Reference


Public Member Functions

def __init__
def shell
def uniqueIP
def isClusterRunning
def config_cluster
 cluster verification
def check_host_resources
def validate_hosts
def create_project
 project management
def do_project
def erase_project
def clear_project
def list_project
def erase_projects
def list_projects
def reset_project
def get_hosts
def cold_start
 cluster management
def stop_cluster
def stop_nodes
def start_cluster
def get_host
def get_engine_store
def start_logger
 log management
def start_resource
 resource management
def update_resource
def stop_resource
def show_resource
def check_resource
def get_return_list
def check_job
 execution status management
def check_status
def start_monitor
def stop_monitor
def show_queue
def get_status
def get_jobId
def remove_record
def make_call
 job distribution functions
def do_and_record
def list_result
 result processing functions
def get_result
def erase_result
def get_output
def getVariables
def use_paths
 engine selection functions
def use_hosts
def use_engines
def get_msname
 ms knowledge functions
def get_antenna_diam
def get_mean_reff
def get_spw_reff
def get_spw_chan
def get_pol_corr
def get_num_field
def get_field_name
def get_num_spw
def get_num_desc
def get_spw_id
def get_pol_id
def get_field_desc
def init_cluster
 setup
def simple_clean
 example of distributing the clean task over engines
def simple_split

Static Public Member Functions

def getCluster
 Static method that returns whatever the current definition of a cluster is.
def setDefaults

Private Attributes

 _project
 _hosts
 _jobs
 _rsrc
 _job_title
 _monitor_on
 _monitor_running
 _monitoringFile
 _verbose
 _resource_on
 _resource_running
 _configdone
 jagonzal (CAS-4276): New cluster specification file
 _cluster
 _JobQueueManager
 _enginesRWoffsets
 __localCluster
 __running

Static Private Attributes

int __default_mem_per_engine
float __default_mem_fraction
float __default_cpu_fraction

Detailed Description

The simple_cluster creates and maintains an ipcluster environment
for controlling parallel execution of casa tasks (tools and scripts)

Definition at line 14 of file simple_cluster.py.


Constructor & Destructor Documentation

def simple_cluster.simple_cluster.__init__ (   self,
  monitoringFile = 'monitoring.log',
  verbose = False 
)

Definition at line 24 of file simple_cluster.py.


Member Function Documentation

jagonzal (CAS-4276 - New cluster specification file): Retrieve the resources available
at the target node in order to dynamically deploy the engines to fit the idle capacity

Definition at line 309 of file simple_cluster.py.

References simple_cluster.simple_cluster.shell().

Referenced by simple_cluster.simple_cluster.config_cluster().

execution status management

Check the execution status of current no-block jobs on all engines.

This function can be used to block the terminal until all submitted
jobs finish. 

Example:
CASA <2>: from simple_cluster import simple_cluster
CASA <3>: sl=simple_cluster()
CASA <4>: sl.init_cluster("my_cluster", "csplit")
CASA <5>: sl.simple_split('/lustre/casa-store/hye/10B-209a_5s.ms/', '')
CASA <6>: sl.check_job()

Definition at line 1271 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone.

def simple_cluster.simple_cluster.check_resource (   self,
  verbose_local = False 
)
jagonzal 29/05/12 (CAS-4137) - HPC project (CAS-4106)
========================================================================
Advanced monitoring function that provides CPU, memory, I/O and job-queue
stats per node and total per host. 

There are two usage modes:
- Called from the monitoring thread at regular time intervals 
  (i.e. within the check_status method), to only dump the stats 
  into a file or the terminal.
- Called from the command line, to return a dictionary in addition 
  to dumping the stats into a file or the terminal.

In both cases logging can be controlled in the following way:
- The user can provide a file name when creating the simple_cluster 
  object, through the string parameter "monitoringFile", to specify 
  the location of the monitoring file; otherwise it defaults to 
  'monitoring.log' in the working directory. 
- The user can also switch on verbose mode when creating the simple_cluster 
  object, through the boolean parameter "verbose", to have the monitoring 
  info dumped into the terminal; otherwise it defaults to False and 
  only dumps into the monitoring file. 
- The user can even use the stand-alone method show_state, specifying 
  verbosity only for that particular call via the "verbose_local" 
  parameter; the method returns a dictionary with all the stats 
  per node and total per host. 

Examples:
- Stand-Alone usage from terminal to return a dictionary
  from simple_cluster import *
  sc = simple_cluster.getCluster()
  stats = sc.show_state(False)
- Stand-Alone usage from terminal to print the stats into the terminal
  from simple_cluster import *
  sc = simple_cluster.getCluster()
  sc.show_state(True)
- Service-Mode usage to specify a custom file for dumping the stats
  from simple_cluster import *
  sc = simple_cluster('mycustomfile.log')
  sc.init_cluster('cluster-config.txt','test-rhel')
- Service-Mode usage to specify a custom file for dumping the stats
  and verbose mode to additionally dump the stats into the terminal
  from simple_cluster import *
  sc = simple_cluster('mycustomfile.log',True)
  sc.init_cluster('cluster-config.txt','test-rhel')

Definition at line 975 of file simple_cluster.py.

References simple_cluster.simple_cluster._enginesRWoffsets, simple_cluster.simple_cluster._jobs, simple_cluster.simple_cluster._monitoringFile, simple_cluster.simple_cluster._rsrc, simple_cluster.simple_cluster._verbose, backend_demo.pop, and simple_cluster.simple_cluster.shell().

Referenced by simple_cluster.simple_cluster.check_status(), simple_cluster.simple_cluster.show_resource(), and simple_cluster.simple_cluster.update_resource().

def simple_cluster.simple_cluster.check_status (   self,
  notify = False 
)
Check the execution status of submitted no-block jobs.

Keyword arguments:
notify -- whether or not to display detailed resource usage info

Normally, one does not call this function directly. The start_monitor
will call this function internally.

Definition at line 1296 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, simple_cluster.simple_cluster._jobs, simple_cluster.simple_cluster._monitor_on, simple_cluster.simple_cluster._monitor_running, simple_cluster.simple_cluster._project, simple_cluster.simple_cluster.check_resource(), tests.test_imcontsub.list, and simple_cluster.simple_cluster.remove_record().

Referenced by simple_cluster.simple_cluster.start_monitor().

def simple_cluster.simple_cluster.clear_project (   self,
  proj 
)
Remove all previous results of the project.

Keyword arguments:
proj -- the name of the project.

A project maintains a subdirectory under each node's work_dir. All 
output files of an engine hosted on that node are by default stored
under that subdirectory.

Example:
CASA <27>: sl.list_project('my_project')
host: casa-dev-07 ------------------------>>>>
casapy-20101122165601-5.log  sim.alma.csv.mid-f0-s0-b640-e768.flux
host: casa-dev-08 ------------------------>>>>
casapy-20101122165601-6.log  sim.alma.csv.mid-f0-s0-b768-e896.flux
host: casa-dev-10 ------------------------>>>>
casapy-20101122165601-7.log  sim.alma.csv.mid-f0-s0-b320-e640.flux

CASA <28>: sl.clear_project('my_project')

CASA <29>: sl.list_project('my_project')
host: casa-dev-07 ------------------------>>>>
host: casa-dev-08 ------------------------>>>>
host: casa-dev-10 ------------------------>>>>

Definition at line 602 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, and simple_cluster.simple_cluster._hosts.

Referenced by simple_cluster.simple_cluster.reset_project().

cluster management

Kill all engines on all hosts and shut down the current cluster.

This is used if a complete restart of the cluster is needed. One can
rerun init_cluster after this. This also kills possible leftover
engines from previous sessions.

Definition at line 768 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, simple_cluster.simple_cluster._hosts, simple_cluster.simple_cluster.shell(), parallel_go.cluster.stop_cluster(), and simple_cluster.simple_cluster.stop_cluster().

def simple_cluster.simple_cluster.config_cluster (   self,
  cfg,
  force = False 
)

cluster verification

Read the configuration file and validate cluster definitions.  

Keyword arguments:
cfg -- the name of cluster configuration file
force -- whether or not to reconfigure if a configured cluster exists

A configuration file is an ASCII text file. Each line defines one node
(also called a host) to be used, in one of the following formats: 

- <hostname>, <number of engines>, <work directory>
- <hostname>, <number of engines>, <work directory>, <fraction of total RAM>
- <hostname>, <number of engines>, <work directory>, <fraction of total RAM>, <RAM per engine>

where the interpretation of the parameters is as follows: 

- hostname: Hostname of the target node where the cluster is deployed 

  NOTE: The hostname has to be provided w/o quotes

- number of engines: Supports three different formats: 

    * If provided as an integer >1: It is interpreted as 
      the actual user-specified maximum number of engines
      
    * If provided as an integer =0: It will deploy as many 
      engines as possible, according to the idle CPU capacity 
      available at the target node
      
    * If provided as a float between 0 and 1: It is interpreted 
      as the percentage of idle CPU capacity that the cluster 
      can use in total at the target node

- work directory: Area in which the cluster will put intermediate 
  files such as log files, configuration files, and monitoring files

  NOTE1: This area has to be accessible from the controller (user) machine, 
  and mounted at the same filesystem path 
 
  NOTE2: The path name has to be provided w/o quotes

- fraction of total RAM: Supports three different formats:

    * If provided as an integer >1: It is interpreted as the actual 
      user-specified maximum amount of RAM to be used in total at 
      the target node
      
    * If provided as an integer =0: It will deploy as many engines 
      as possible, according to the free RAM available at the target node
      
    * If provided as a float between 0 and 1: It is interpreted as 
      the percentage of free RAM that the cluster can use in total 
      at the target node
    
- RAM per engine: An integer, interpreted as the required memory 
  per engine in MB (default is 512 MB) 
  
It is also possible to add comments by using the # character at the 
beginning of the line. Example:

#####################################################

# CASA cluster configuration file for expert user
orion, 10, /home/jdoe/test/myclusterhome1
m42, 4, /home/jdoe/test/myclusterhome2, 0.6, 1024
antares, 0.6, /home/jdoe/test/myclusterhome3, 0, 2048

#####################################################

- At host "orion": It will deploy up to 10 engines, with working 
  directory /home/jdoe/test/myclusterhome1, and using as much free 
  RAM as possible (up to 90% by default), taking into 
  account that each engine can use up to 512 MB (the default and minimum)
  
- At host "m42": It will deploy up to 4 engines, with working directory 
  /home/jdoe/test/myclusterhome2, using at most 60% of the free RAM 
  available, taking into account that each engine can use up to 1024 MB.   
  
- At host "antares": It will deploy as many engines as possible, with 
  working directory /home/jdoe/test/myclusterhome3, using up to 60% of the 
  idle CPU capacity / cores, and as much free RAM as possible 
  (up to 90% by default), taking into account that each engine can use up 
  to 2048 MB.  
 
Normally, one does not call this function directly. 
The init_cluster function will trigger this function.
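The line formats above can be illustrated with a small parser sketch. This is not the actual simple_cluster parser; the function name is hypothetical, but the defaults (90% of free RAM, 512 MB per engine) follow the description above:

```python
def parse_cluster_line(line):
    """Parse one host line of a cluster configuration file.

    Supported forms (see the format description above):
      hostname, n_engines, work_dir
      hostname, n_engines, work_dir, mem_fraction
      hostname, n_engines, work_dir, mem_fraction, mem_per_engine
    Illustrative sketch only, not the actual simple_cluster code.
    """
    line = line.strip()
    if not line or line.startswith('#'):
        return None  # blank line or comment
    fields = [f.strip() for f in line.split(',')]
    return {
        'hostname': fields[0],
        # int >1: max engines; 0: fill idle CPU; float in (0,1): CPU fraction
        'engines': float(fields[1]),
        'work_dir': fields[2],
        # defaults per the description: up to 90% of free RAM, 512 MB/engine
        'mem_fraction': float(fields[3]) if len(fields) > 3 else 0.9,
        'mem_per_engine': int(fields[4]) if len(fields) > 4 else 512,
    }
```

Applied to the example file above, the "m42" line yields 4 engines with a 0.6 RAM fraction and 1024 MB per engine, while the "orion" line falls back to the defaults.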

Definition at line 111 of file simple_cluster.py.

References simple_cluster.simple_cluster.__default_cpu_fraction, simple_cluster.simple_cluster.__default_mem_fraction, simple_cluster.simple_cluster.__default_mem_per_engine, simple_cluster.simple_cluster._configdone, simple_cluster.simple_cluster._hosts, simple_cluster.simple_cluster._jobs, simple_cluster.simple_cluster.check_host_resources(), and simple_cluster.simple_cluster.validate_hosts().

Referenced by simple_cluster.simple_cluster.init_cluster().

def simple_cluster.simple_cluster.create_project (   self,
  proj = "" 
)

project management

Create a project. 

Keyword arguments:
proj -- the name of the project (default: 'proj'+timestamp).  

A project maintains a subdirectory under each node's work_dir. All 
output files of an engine hosted on that node are by default stored
under that subdirectory.

This function is normally called internally by the init_cluster function. 

Example:
CASA <33>: sl.list_projects
host: casa-dev-07 ------------------------>>>>
bProj  bsplit  csplit  my_project
host: casa-dev-08 ------------------------>>>>
bProj  bsplit  csplit  my_project
host: casa-dev-10 ------------------------>>>>
bProj  bsplit  csplit

CASA <34>: sl.create_project('dflag')
output directory:
/home/casa-dev-07/hye/ptest/dflag
/home/casa-dev-08/hye/ptest/dflag
/home/casa-dev-10/hye/ptest/dflag

CASA <36>: sl.list_projects
host: casa-dev-07 ------------------------>>>>
bProj  bsplit  csplit  dflag  my_project
host: casa-dev-08 ------------------------>>>>
bProj  bsplit  csplit  dflag  my_project
host: casa-dev-10 ------------------------>>>>
bProj  bsplit  csplit  dflag

Definition at line 457 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, simple_cluster.simple_cluster._hosts, and simple_cluster.simple_cluster._project.

Referenced by simple_cluster.simple_cluster.init_cluster().

def simple_cluster.simple_cluster.do_and_record (   self,
  cmd,
  id,
  group = '',
  subMS = '' 
)
Submit a function call to an engine and record its execution status.

Keyword arguments:
cmd -- the function call string
id -- the id of the engine to be assigned 
group -- the group this cmd belongs to and the recipient of notifications.
 group can be an email address, a label for the job, or both
 separated by a ':'. Once all the jobs that have the same label
 finish, an email notification will be sent.

Example:
CASA <12>: param=dict()
CASA <13>: param['vis']='NGC5921.ms'
CASA <14>: param['spw']='4'
CASA <15>: cmd=sl.make_call('flagdata', param)
CASA <17>: sl.do_and_record(cmd, 7, 'you@nrao.edu:flag ngc5921')
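The group argument in the example combines an email address and a job label separated by ':'. A minimal sketch of how such a string could be split (the helper name is hypothetical, not part of simple_cluster):

```python
def parse_group(group):
    """Split a do_and_record-style group string into (email, label).

    Per the description above, group may be an email address, a job
    label, or both separated by ':'. Illustrative helper only.
    """
    if ':' in group:
        email, label = group.split(':', 1)
    elif '@' in group:
        email, label = group, ''
    else:
        email, label = '', group
    return email.strip(), label.strip()
```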

Definition at line 1624 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, simple_cluster.simple_cluster._enginesRWoffsets, simple_cluster.simple_cluster._job_title, and simple_cluster.simple_cluster._jobs.

Referenced by simple_cluster.simple_cluster.simple_clean(), and simple_cluster.simple_cluster.simple_split().

def simple_cluster.simple_cluster.do_project (   self,
  proj 
)
Use a previously created project. 

Keyword arguments:
proj -- the name of the project.

A project maintains a subdirectory under each node's work_dir. All 
output files of an engine hosted on that node are by default stored
under that subdirectory.

Example:
CASA <38>: sl._project
Out[38]: 'dflag'

CASA <39>: sl.do_project('csplit')
output directory:
/home/casa-dev-07/hye/ptest/csplit
/home/casa-dev-08/hye/ptest/csplit
/home/casa-dev-10/hye/ptest/csplit

CASA <40>: sl._project
Out[40]: 'csplit'

Definition at line 512 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, simple_cluster.simple_cluster._hosts, and simple_cluster.simple_cluster._project.

def simple_cluster.simple_cluster.erase_project (   self,
  proj 
)
Erase the files and directories of a project. 

Keyword arguments:
proj -- the name of the project.

A project maintains a subdirectory under each node's work_dir. All 
output files of an engine hosted on that node are by default stored
under that subdirectory.

Example:
CASA <30>: sl.list_projects
host: casa-dev-07 ------------------------>>>>
aNew  bProj  bsplit  csplit  my_project
host: casa-dev-08 ------------------------>>>>
aNew  bProj  bsplit  csplit  my_project
host: casa-dev-10 ------------------------>>>>
bProj  bsplit  csplit

CASA <31>: sl.erase_project('aNew')

CASA <32>: sl.list_projects
host: casa-dev-07 ------------------------>>>>
bProj  bsplit  csplit  my_project
host: casa-dev-08 ------------------------>>>>
bProj  bsplit  csplit  my_project
host: casa-dev-10 ------------------------>>>>
bProj  bsplit  csplit

Definition at line 556 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, simple_cluster.simple_cluster._hosts, and simple_cluster.simple_cluster._project.

Erase all previous results of all projects.

A project maintains a subdirectory under each node's work_dir. All 
output files of an engine hosted on that node are by default stored
under that subdirectory.

Definition at line 675 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, simple_cluster.simple_cluster._hosts, and simple_cluster.simple_cluster._project.

Read the project.result file and erase the result for a label.

Keyword arguments:
tm -- the result label

Example:
CASA <33>: sl.list_result
Out[33]:
['#### new split ####################### Mon Mar 14 14:48:08 2011 ####',
 '#### flag ngc5921 #################### Wed Mar 16 10:43:12 2011 ####']
CASA <34>: sl.erase_result('flag ngc5921')
CASA <35>: sl.list_result
Out[35]:
['#### new split ####################### Mon Mar 14 14:48:08 2011 ####']

Definition at line 1732 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, and simple_cluster.simple_cluster._project.

Get the diameter of the antennas.

Keyword arguments:
vis -- the path+name of visibility data

Example:
CASA <15>: vis="/lustre/casa-store/hye/10B-209a_5s.ms"
CASA <19>: sl.get_antenna_diam(vis)
  Out[19]: 25.0

Definition at line 2081 of file simple_cluster.py.

Referenced by simple_cluster.simple_cluster.simple_clean().

Get the root path where an engine writes out results.

Keyword arguments:
id -- the engine id

Example:
CASA <52>: sl.get_engine_store(8)
Out[52]: '/home/casa-dev-10/hye/ptest/bProj/'

Definition at line 863 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, simple_cluster.simple_cluster._hosts, simple_cluster.simple_cluster._project, and simple_cluster.simple_cluster.get_host().

Referenced by simple_cluster.simple_cluster.simple_clean(), and simple_cluster.simple_cluster.simple_split().

Get a description of each (field, data description) pair of the MS.

Keyword arguments:
vis -- the path+name of visibility data

Example:
CASA <15>: vis="/lustre/casa-store/hye/10B-209a_5s.ms"
CASA <40>: sl.get_field_desc(vis)
  Out[40]:
{(0, 0): {'cost': [0, 82],
  'desc': 0,
  'field': 0,
  'nchan': 64,
  'ncorr': 4,
  'nrows': 16848,
  'pol': 0,
  'spw': 0},
 (0, 1): {'cost': [0, 82],
  'desc': 1,
  'field': 0,
  'nchan': 64,
  'ncorr': 4,
  'nrows': 16848,
  'pol': 0,
  'spw': 1}
 ... omitted ...
}

Definition at line 2318 of file simple_cluster.py.

References simple_cluster.simple_cluster.get_pol_corr(), simple_cluster.simple_cluster.get_pol_id(), simple_cluster.simple_cluster.get_spw_chan(), and simple_cluster.simple_cluster.get_spw_id().

Referenced by simple_cluster.simple_cluster.simple_clean(), and simple_cluster.simple_cluster.simple_split().
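The (field, desc)-keyed dictionary shown above lends itself to simple aggregation, e.g. to gauge the relative amount of data per field before distributing jobs. A hedged sketch (the helper is illustrative, not part of simple_cluster):

```python
def rows_per_field(field_desc):
    """Sum the 'nrows' entries of a get_field_desc-style dictionary,
    grouped by field id. Illustrative helper, not simple_cluster code."""
    totals = {}
    for (field, desc), info in field_desc.items():
        totals[field] = totals.get(field, 0) + info['nrows']
    return totals
```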

def simple_cluster.simple_cluster.get_field_name (   self,
  vis,
  id 
)
Get the name of a field.

Keyword arguments:
vis -- the path+name of visibility data
id -- the field id

Example:
CASA <15>: vis="/lustre/casa-store/hye/10B-209a_5s.ms"
CASA <35>: sl.get_field_name(vis, 5)
  Out[35]: 'J0738+1742'

Definition at line 2210 of file simple_cluster.py.

Referenced by simple_cluster.simple_cluster.simple_split().

def simple_cluster.simple_cluster.get_host (   self,
  id 
)
Find out the name of the node that hosts this engine.

Keyword arguments:
id -- the engine id

Example:
CASA <50>: sl.get_host(8)
Out[50]: 'casa-dev-10'

Definition at line 835 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone.

Referenced by simple_cluster.simple_cluster.get_engine_store().

List the current cluster configuration.

CASA <48>: sl.get_hosts
Out[48]:
[['casa-dev-07', 4, '/home/casa-dev-07/hye/ptest'],
 ['casa-dev-08', 4, '/home/casa-dev-08/hye/ptest'],
 ['casa-dev-10', 4, '/home/casa-dev-10/hye/ptest']]

Definition at line 748 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, and simple_cluster.simple_cluster._hosts.

Referenced by simple_cluster.simple_cluster.use_engines(), simple_cluster.simple_cluster.use_hosts(), and simple_cluster.simple_cluster.use_paths().

def simple_cluster.simple_cluster.get_jobId (   self,
  status 
)
Get a list of jobs with the given status.

Keyword arguments:
status -- the job status or the job title

Example:
CASA <2>: from simple_cluster import simple_cluster
CASA <3>: sl=simple_cluster()
CASA <4>: sl.init_cluster("my_cluster", "csplit")
CASA <5>: sl.simple_split('/lustre/casa-store/hye/10B-209a_5s.ms/', 
                  'you@nrao.edu:3rd split')
CASA <6>: sl.get_jobId('done')

Definition at line 1539 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, and simple_cluster.simple_cluster._jobs.

Get the mean reference frequency.

Keyword arguments:
vis -- the path+name of visibility data

Example:
CASA <15>: vis="/lustre/casa-store/hye/10B-209a_5s.ms"
CASA <20>: sl.get_mean_reff(vis)
  Out[20]: 6298222222.2222223

Definition at line 2102 of file simple_cluster.py.

Referenced by simple_cluster.simple_cluster.simple_clean().

def simple_cluster.simple_cluster.get_msname (   self,
  vis 
)

ms knowledge functions

Get the MS name of the given vis.

Keyword arguments:
vis -- the path+name of visibility data

Example:
CASA <15>: vis="/lustre/casa-store/hye/10B-209a_5s.ms"
CASA <16>: sl.get_msname(vis)
  Out[18]: '10B-209a_5s'

Definition at line 2062 of file simple_cluster.py.

Referenced by simple_cluster.simple_cluster.simple_clean(), and simple_cluster.simple_cluster.simple_split().

def simple_cluster.simple_cluster.get_num_desc (   self,
  vis 
)
Get the number of data descriptions.

Keyword arguments:
vis -- the path+name of visibility data

Example:
CASA <15>: vis="/lustre/casa-store/hye/10B-209a_5s.ms"
CASA <37>: sl.get_num_desc(vis)
  Out[37]: 18L

Definition at line 2252 of file simple_cluster.py.

Get the number of fields.

Keyword arguments:
vis -- the path+name of visibility data

Example:
CASA <15>: vis="/lustre/casa-store/hye/10B-209a_5s.ms"
CASA <32>: sl.get_num_field(vis)
  Out[32]: 6L

Definition at line 2192 of file simple_cluster.py.

def simple_cluster.simple_cluster.get_num_spw (   self,
  vis 
)
Get the number of spectral windows.

Keyword arguments:
vis -- the path+name of visibility data

Example:
CASA <15>: vis="/lustre/casa-store/hye/10B-209a_5s.ms"
CASA <36>: sl.get_num_spw(vis)
  Out[36]: 18L

Definition at line 2234 of file simple_cluster.py.

def simple_cluster.simple_cluster.get_output (   self,
  result,
  item,
  kwargs 
)
Pick from the result list the items that meet the conditions in kwargs.

Keyword arguments:
result -- the result label or the result from running get_result 
item -- the result item to get
kwargs -- the conditions to limit the result

Example:
CASA <33>: sl.list_result
  Out[33]:
['#### new split ####################### Mon Mar 14 14:48:08 2011 ####',
 '#### flag ngc5921 #################### Wed Mar 16 10:43:12 2011 ####']
CASA <34>: sl.get_result('new split')
  Out[34]:
['10      done       30  split(vis="/lustre/casa-store/hye/10B-209a_5s.ms", outputvis="/home/casa-dev-10/hye/ptest/csplit/10B-209a_5s-f5-s10.ms", spw="10", datacolumn="DATA", field="J0738+1742")',
 '1      done       40  split(vis="/lustre/casa-store/hye/10B-209a_5s.ms", outputvis="/home/casa-dev-07/hye/ptest/csplit/10B-209a_5s-f3-s2.ms", spw="2", datacolumn="DATA", field="J0738+1742")',
 '2      done       75  split(vis="/lustre/casa-store/hye/10B-209a_5s.ms", outputvis="/home/casa-dev-07/hye/ptest/csplit/10B-209a_5s-f4-s10.ms", spw="10", datacolumn="DATA", field="2MJ0746")',
 ..... many other entries ...]
CASA <35>: sl.get_output('new split', 'outputvis', field='3C84')
  Out[35]:
['/home/casa-dev-07/hye/ptest/csplit/10B-209a_5s-f1-s8.ms',
 '/home/casa-dev-10/hye/ptest/csplit/10B-209a_5s-f1-s14.ms',
 '/home/casa-dev-08/hye/ptest/csplit/10B-209a_5s-f0-s0.ms']
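The filtering that get_output performs on such records can be approximated by matching the key="value" pairs in each result line. A simplified, regex-based stand-in (illustrative only, not the actual implementation):

```python
import re

def filter_output(result_lines, item, **conditions):
    """Pick the value of `item` from each result record whose
    key="value" parameters satisfy all given conditions.
    Simplified stand-in for simple_cluster.get_output."""
    picked = []
    for line in result_lines:
        # collect every key="value" pair in the record
        params = dict(re.findall(r'(\w+)="([^"]*)"', line))
        if all(params.get(k) == v for k, v in conditions.items()):
            if item in params:
                picked.append(params[item])
    return picked
```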

Definition at line 1772 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, parallel_go.cluster.get_result(), and simple_cluster.simple_cluster.get_result().

def simple_cluster.simple_cluster.get_pol_corr (   self,
  vis,
  pol = 0 
)
Get the number of correlations of a polarization setup.

Keyword arguments:
vis -- the path+name of visibility data
pol -- the polarization id

Example:
CASA <15>: vis="/lustre/casa-store/hye/10B-209a_5s.ms"
CASA <31>: sl.get_pol_corr(vis, 0)
  Out[31]: 4

Definition at line 2168 of file simple_cluster.py.

Referenced by simple_cluster.simple_cluster.get_field_desc().

def simple_cluster.simple_cluster.get_pol_id (   self,
  vis,
  desc = 0 
)
Get the polarization id for a data description.

Keyword arguments:
vis -- the path+name of visibility data
desc -- the data description id

Example:
CASA <15>: vis="/lustre/casa-store/hye/10B-209a_5s.ms"
CASA <39>: sl.get_pol_id(vis, 17)
  Out[39]: 0

Definition at line 2294 of file simple_cluster.py.

Referenced by simple_cluster.simple_cluster.get_field_desc().

def simple_cluster.simple_cluster.get_result (   self,
  tm 
)
Read the project.result file and write out the result for a label.

Keyword arguments:
tm -- the result label

Example:
CASA <33>: sl.list_result
Out[33]:
['#### new split ####################### Mon Mar 14 14:48:08 2011 ####',
 '#### flag ngc5921 #################### Wed Mar 16 10:43:12 2011 ####']
CASA <34>: sl.get_result('new split')
Out[34]:
...omitted...

Definition at line 1696 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, and simple_cluster.simple_cluster._project.

Referenced by simple_cluster.simple_cluster.get_output().

jagonzal (CAS-4376): Gather return variables from the different engines back to the main CASA controller instance

Definition at line 1255 of file simple_cluster.py.

def simple_cluster.simple_cluster.get_spw_chan (   self,
  vis,
  spw = 0 
)
Get the number of channels of an spw.

Keyword arguments:
vis -- the path+name of visibility data
spw -- the spectral window id

Example:
CASA <15>: vis="/lustre/casa-store/hye/10B-209a_5s.ms"
CASA <24>: sl.get_spw_chan(vis, 8)
  Out[24]: 64

Definition at line 2144 of file simple_cluster.py.

Referenced by simple_cluster.simple_cluster.get_field_desc(), and simple_cluster.simple_cluster.simple_clean().

def simple_cluster.simple_cluster.get_spw_id (   self,
  vis,
  desc = 0 
)
Get the spectral window id for a data description.

Keyword arguments:
vis -- the path+name of visibility data
desc -- the data description id

Example:
CASA <15>: vis="/lustre/casa-store/hye/10B-209a_5s.ms"
CASA <38>: sl.get_spw_id(vis, 17)
  Out[38]: 17

Definition at line 2270 of file simple_cluster.py.

Referenced by simple_cluster.simple_cluster.get_field_desc().

def simple_cluster.simple_cluster.get_spw_reff (   self,
  vis,
  spw = 0 
)
Get the reference frequency of an spw.

Keyword arguments:
vis -- the path+name of visibility data
spw -- the spectral window id

Example:
CASA <15>: vis="/lustre/casa-store/hye/10B-209a_5s.ms"
CASA <21>: sl.get_spw_reff(vis, 8)
  Out[21]: 5056000000.0

Definition at line 2120 of file simple_cluster.py.

def simple_cluster.simple_cluster.get_status (   self,
  long = False 
)
Display job execution status.

Keyword arguments:
long -- whether or not to display detailed execution status info

Example:
CASA <2>: from simple_cluster import simple_cluster
CASA <3>: sl=simple_cluster()
CASA <4>: sl.init_cluster("my_cluster", "csplit")
CASA <5>: sl.simple_split('/lustre/casa-store/hye/10B-209a_5s.ms/', 
                  'you@nrao.edu:3rd split')
CASA <6>: sl.get_status()
engine    status  time(s)     start  command   title
     0      done       31  16:41:56    split      15
     2 scheduled        0              split      78
     7      done       41  16:42:38    split      16
     9   running       51  16:42:59    split      17
     1      done       36  16:41:56    split      18

Definition at line 1497 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, and simple_cluster.simple_cluster._jobs.

Referenced by simple_cluster.simple_cluster.simple_clean(), and simple_cluster.simple_cluster.simple_split().

Static method that returns whatever the current definition of a cluster is.

If none is defined the default cluster is created, initialized and returned.

Definition at line 76 of file simple_cluster.py.

def simple_cluster.simple_cluster.getVariables (   self,
  varList,
  engine 
)
Return a list with the values of all variables in varList for the
specified engine. This is a thin wrapper around the pull method in
the cluster.

Definition at line 1837 of file simple_cluster.py.
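Since getVariables is described as a thin wrapper around the cluster's pull method, its essence can be sketched as follows. The pull signature and its per-engine return shape are assumptions here, not the documented API:

```python
def get_variables(cluster, var_list, engine):
    """Fetch the value of each variable in var_list from one engine.

    Assumes cluster.pull(name, engine) returns an {engine_id: value}
    mapping; names and shapes are illustrative only.
    """
    return [cluster.pull(name, engine)[engine] for name in var_list]
```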

def simple_cluster.simple_cluster.init_cluster (   self,
  clusterfile = '',
  project = '' 
)

setup

Set up the cluster.

Keyword arguments:
clusterfile -- the cluster definition file
project -- the name of the project (default: 'proj'+timestamp).  

A configuration file is an ASCII text file. Each line defines a node
(also called host) with one line per host to be used, and the following 
format: 

- <hostname>, <number of engines>, <work directory>
- <hostname>, <number of engines>, <work directory>, <fraction of total RAM>
- <hostname>, <number of engines>, <work directory>, <fraction of total RAM>, <RAM per engine>

where the interpretation of the parameters is as follows: 

- hostname: Hostname of the target node where the cluster is deployed 

  NOTE: The hostname has to be provided w/o quotes

- number of engines: Supports in turns 3 different formats 

    * If provided as an integer >1: It is interpreted as 
      the actual user-specified maximum number of engines
      
    * If provided as an integer =0: It will deploy as maximum 
      engines as possible, according to the idle CPU capacity 
      available at the target node
      
    * If provided as a float between 0 and 1: It is interpreted 
      as the percentage of idle CPU capacity that the cluster 
      can use in total at the target node

- work directory: Area in which the cluster will put intermediate 
  files such as log files, configuration files, and monitoring files

  NOTE1: This area has to be accessible from the controller (user) machine, 
 and mounted in the same path of the filesystem 
 
  NOTE2: The path name has to be provided w/o quotes

- fraction of total RAM: Supports in turns 3 different formats:

    * If provided as an integer >1: It is interpreted as the actual 
      user-specified maximum amount of RAM to be used in total at 
      the target node
      
    * If provided as an integer =0: It will deploy as maximum engines 
      as possible, according to the free RAM available at target node
      
    * If provided as a float between 0 and 1: It is interpreted as 
      the percentage of free RAM that the cluster can use in total 
      at the target node
    
- RAM per engine: Integer, which is interpreted as the required memory 
  per engine in MB (default is 512MB) 
  
It is also possible to add comments, by using the # character at the 
beginning of the line. Example:

#####################################################

# CASA cluster configuration file for expert user
orion, 10, /home/jdoe/test/myclusterhome1
m42, 4, /home/jdoe/test/myclusterhome2, 0.6, 1024
antares, 0.6, /home/jdoe/test/myclusterhome3, 0, 2048

#####################################################

- At host "orion": It will deploy up to 10 engines, with working 
  directory /home/jdoe/test/myclusterhome1, and using as much free 
  RAM available as possible (up to 90% by default), taking into 
  account that each engine can use up to 512 MB (the default and minimum)
  
- At host "m42": It will deploy up to 4 engines, with working directory 
  /home/jdoe/test/myclusterhome2, and using at most 60% of the free RAM 
  available, taking into account that each engine can use up to 1024 MB.
  
- At host "antares": It will deploy as many engines as possible, with 
  working directory /home/jdoe/test/myclusterhome3, using up to 60% of the 
  idle CPU capacity / cores, and as much free RAM available as possible 
  (up to 90% by default), taking into account that each engine can use up 
  to 2048 MB.
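The per-host field rules above can be sketched as a small parser. This is an illustrative sketch only; `parse_host_line` is a hypothetical helper, not part of simple_cluster, and the defaults (90% of free RAM, 512 MB per engine) are taken from the description above:

```python
def parse_host_line(line):
    """Parse one line of a hypothetical cluster-config file.

    Returns None for blank lines and '#' comments; otherwise a dict with
    hostname, engines, work_dir, ram (fraction/total/0) and
    ram_per_engine (MB).
    """
    line = line.strip()
    if not line or line.startswith('#'):
        return None
    fields = [f.strip() for f in line.split(',')]
    host, engines, work_dir = fields[0], float(fields[1]), fields[2]
    # Optional 4th field: fraction of total RAM (default: up to 90% of free RAM)
    ram = float(fields[3]) if len(fields) > 3 else 0.9
    # Optional 5th field: required memory per engine in MB (default 512)
    ram_per_engine = int(fields[4]) if len(fields) > 4 else 512
    return {'hostname': host,
            'engines': int(engines) if engines >= 1 else engines,
            'work_dir': work_dir,
            'ram': ram,
            'ram_per_engine': ram_per_engine}
```

For example, parsing the "m42" line of the sample file above yields 4 engines, a RAM fraction of 0.6, and 1024 MB per engine.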

Example:
CASA <15>: from simple_cluster import *
CASA <16>: sl=simple_cluster()
CASA <17>: sl.init_cluster('cluster-config.txt', 'ProjectName')

Definition at line 2390 of file simple_cluster.py.

References simple_cluster.simple_cluster.__localCluster, simple_cluster.simple_cluster.__running, simple_cluster.simple_cluster._configdone, simple_cluster.simple_cluster.config_cluster(), simple_cluster.simple_cluster.create_project(), parallel_go.cluster.start_cluster(), and simple_cluster.simple_cluster.start_cluster().

Definition at line 105 of file simple_cluster.py.

References simple_cluster.simple_cluster.__running.

def simple_cluster.simple_cluster.list_project (   self,
  proj 
)
List previous results of the given project.

Keyword arguments:
proj -- the name of the project.

A project maintains a subdirectory under each node's work_dir. All 
output files of an engine hosted on that node are by default stored 
under that subdirectory.

Example:
CASA <19>: sl.list_project('bsplit')
host: casa-dev-07 ------------------------>>>>
test_regression_TDEM0003-f0-s11.ms   test_regression_TDEM0003-f0-s13.ms
host: casa-dev-08 ------------------------>>>>
test_regression_TDEM0003-f0-s10.ms  test_regression_TDEM0003-f3-s9.ms
host: casa-dev-10 ------------------------>>>>
test_regression_TDEM0003-f0-s0.ms   test_regression_TDEM0003-f4-s11.ms
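The per-node layout described above can be sketched as follows; `project_dirs` is a hypothetical helper for illustration (not part of simple_cluster), using the [hostname, engines, work_dir] shape shown in the get_hosts example:

```python
import os

def project_dirs(hosts, project):
    """Map each hostname to its per-project subdirectory.

    Each entry of 'hosts' is assumed to be [hostname, n_engines, work_dir];
    the project gets a subdirectory under every node's work_dir.
    """
    return {h[0]: os.path.join(h[2], project) for h in hosts}
```

For instance, with work_dir '/home/casa-dev-07/hye/ptest' the project 'bsplit' lives in '/home/casa-dev-07/hye/ptest/bsplit' on that node.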

Definition at line 642 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, and simple_cluster.simple_cluster._hosts.

List all previous projects

A project maintains a subdirectory under each node's work_dir. All 
output files of an engine hosted on that node are by default stored 
under that subdirectory.

Example:
CASA <16>: sl.list_projects
host: casa-dev-07 ------------------------>>>>
aNew  bProj  bsplit  csplit  my_project
host: casa-dev-08 ------------------------>>>>
aNew  bProj  bsplit  csplit  my_project
host: casa-dev-10 ------------------------>>>>
bProj  bsplit  csplit

Definition at line 690 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, and simple_cluster.simple_cluster._hosts.

result processing functions

read the project.result file and write out all labels

Example:
CASA <33>: sl.list_result
Out[33]:
['#### new split ####################### Mon Mar 14 14:48:08 2011 ####',
 '#### flag ngc5921 #################### Wed Mar 16 10:43:12 2011 ####']

Definition at line 1672 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, and simple_cluster.simple_cluster._project.

def simple_cluster.simple_cluster.make_call (   self,
  func,
  param 
)

job distribution functions

Make a function call string with function name and parameters.

Keyword arguments:
func -- the name of the function
param -- the dictionary of parameters and values

Example:
CASA <12>: param=dict()
CASA <13>: param['vis']='NGC5921.ms'
CASA <14>: param['spw']='4'
CASA <15>: sl.make_call('flagdata', param)
  Out[15]: 'flagdata(vis="NGC5921.ms", spw=4)'

Definition at line 1589 of file simple_cluster.py.

Referenced by simple_cluster.simple_cluster.simple_clean(), and simple_cluster.simple_cluster.simple_split().
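The call-string construction shown in the example can be imitated with a short sketch. This is illustrative only, not the actual make_call implementation; here string values are quoted and other values passed through as-is:

```python
def make_call_sketch(func, param):
    """Build a Python call string from a function name and a parameter dict.

    Sketch of the behaviour illustrated by make_call: string values are
    double-quoted, everything else is formatted with str().
    """
    parts = []
    for key, value in param.items():
        if isinstance(value, str):
            parts.append('%s="%s"' % (key, value))
        else:
            parts.append('%s=%s' % (key, value))
    return '%s(%s)' % (func, ', '.join(parts))
```

For example, make_call_sketch('flagdata', {'vis': 'NGC5921.ms', 'spw': 4}) produces 'flagdata(vis="NGC5921.ms", spw=4)'.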

def simple_cluster.simple_cluster.remove_record (   self,
  jobname = None 
)
Remove job execution status of a job.

Keyword arguments:
jobname -- the job name or status of the job(s) to be removed from display

If jobname is not specified or is None, all jobs are removed.

Definition at line 1566 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, and simple_cluster.simple_cluster._jobs.

Referenced by simple_cluster.simple_cluster.check_status().

Erase previous results and reset the status of the current project.

A project maintains a subdirectory under each node's work_dir. All 
output files of an engine hosted on that node are by default stored 
under that subdirectory.

Example:
CASA <43>: sl.list_project('bProj')
....too many here...

CASA <44>: sl.do_project('bProj')
output directory:
/home/casa-dev-07/hye/ptest/bProj
/home/casa-dev-08/hye/ptest/bProj
/home/casa-dev-10/hye/ptest/bProj

CASA <45>: sl.list_project('bProj')
host: casa-dev-07 ------------------------>>>>
host: casa-dev-08 ------------------------>>>>
host: casa-dev-10 ------------------------>>>>

Definition at line 717 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, simple_cluster.simple_cluster._job_title, simple_cluster.simple_cluster._jobs, simple_cluster.simple_cluster._monitor_on, simple_cluster.simple_cluster._project, simple_cluster.simple_cluster.clear_project(), and simple_cluster.simple_cluster.stop_monitor().

def simple_cluster.simple_cluster.setDefaults (   default_mem_per_engine = 512,
  default_mem_fraction = 0.9,
  default_cpu_fraction = 0.9 
) [static]

Definition at line 99 of file simple_cluster.py.

def simple_cluster.simple_cluster.shell (   self,
  hostname 
)
Creates the command line to execute the given command on the given host.
If and only if the host is not localhost, ssh is used.

Definition at line 47 of file simple_cluster.py.

References simple_cluster.simple_cluster.uniqueIP().

Referenced by simple_cluster.simple_cluster.check_host_resources(), simple_cluster.simple_cluster.check_resource(), and simple_cluster.simple_cluster.cold_start().

Display job queue.

Example:
CASA <2>: from simple_cluster import simple_cluster
CASA <3>: sl=simple_cluster()
CASA <4>: sl.init_cluster("my_cluster", "csplit")
CASA <5>: sl.simple_split('/lustre/casa-store/hye/10B-209a_5s.ms/', 
                  'you@nrao.edu:3rd split')
CASA <6>: sl.show_queue()

Definition at line 1481 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone.

def simple_cluster.simple_cluster.show_resource (   self,
  long = False 
)
jagonzal (CAS-4372): Old resource monitoring functions were causing crashes in NRAO cluster

Definition at line 964 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, and simple_cluster.simple_cluster.check_resource().

Referenced by simple_cluster.simple_cluster.start_cluster().

def simple_cluster.simple_cluster.simple_clean (   self,
  vs,
  nx,
  ny,
  mode = 'channel',
  email = '' 
)

example to distribute clean task over engines

Make images with a simple cluster

Keyword arguments:
vs -- the visibility data
nx, ny -- the size (pixels) of the image
mode -- either 'channel' or 'continuum'
email -- the email address to notify the completion

Example:
CASA <15>: from simple_cluster import *
CASA <16>: sl=simple_cluster()
CASA <17>: sl.init_cluster('my_cluster', 'aProj')
CASA <18>: simple_clean(
    vis='/home/casa-dev-09/hye/ptest/sim.alma.csv.mid.ms', 
    nx=256, ny=256, mode='channel')

Definition at line 2526 of file simple_cluster.py.

References simple_cluster.simple_cluster.do_and_record(), simple_cluster.simple_cluster.get_antenna_diam(), simple_cluster.simple_cluster.get_engine_store(), simple_cluster.simple_cluster.get_field_desc(), simple_cluster.simple_cluster.get_mean_reff(), simple_cluster.simple_cluster.get_msname(), simple_cluster.simple_cluster.get_spw_chan(), simple_cluster.simple_cluster.get_status(), and simple_cluster.simple_cluster.make_call().

def simple_cluster.simple_cluster.simple_split (   self,
  vs,
  email 
)
Split by source (field, spw) with parallel engines.

Keyword arguments:
vs -- the visibility data
email -- the email address to notify the completion

Example:
CASA <15>: from simple_cluster import *
CASA <16>: sl=simple_cluster()
CASA <17>: sl.init_cluster('my_cluster', 'aProj')
CASA <18>: vis='/home/casa-dev-09/hye/ptest/sim.alma.csv.mid.ms'
CASA <19>: simple_split(vis)

Definition at line 2628 of file simple_cluster.py.

References simple_cluster.simple_cluster.do_and_record(), simple_cluster.simple_cluster.get_engine_store(), simple_cluster.simple_cluster.get_field_desc(), simple_cluster.simple_cluster.get_field_name(), simple_cluster.simple_cluster.get_msname(), simple_cluster.simple_cluster.get_status(), simple_cluster.simple_cluster.make_call(), and simple_cluster.simple_cluster.use_engines().

log management

Link all engine logs to the current directory. 

After running this, the current directory contains links to each of
the engine logs with file name 'engine-[id].log', so that one can 
conveniently browse engine logs with the casa logviewer.

Definition at line 888 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone.

Referenced by simple_cluster.simple_cluster.start_cluster().

Start monitoring execution status of submitted no-block jobs

Normally, one does not call this function directly. The init_cluster
will call this function.

Definition at line 1457 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, simple_cluster.simple_cluster._monitor_on, and simple_cluster.simple_cluster.check_status().

Referenced by simple_cluster.simple_cluster.start_cluster().

resource management

Start monitoring resource usage.

Four critical resource usage indicators (for parallel execution), 
namely, %cpu, %iowait, %mem and %memswap on all hosts are continuously
checked. This information can be used to tune the parallel performance.

Normally, one does not call this function directly. The init_cluster
will call this function.

Definition at line 907 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, simple_cluster.simple_cluster._resource_on, simple_cluster.simple_cluster._rsrc, and simple_cluster.simple_cluster.update_resource().

Destructor method to shut down the cluster gracefully 

Definition at line 786 of file simple_cluster.py.

References simple_cluster.simple_cluster.__running, and simple_cluster.simple_cluster.stop_monitor().

Referenced by simple_cluster.simple_cluster.cold_start().

Stop monitoring execution status of submitted no-block jobs

Normally, one does not call this function directly. 

Definition at line 1469 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, simple_cluster.simple_cluster._monitor_on, and simple_cluster.simple_cluster._monitor_running.

Referenced by simple_cluster.simple_cluster.reset_project(), and simple_cluster.simple_cluster.stop_cluster().

Stop all engines on all hosts of current cluster.

After running this, the cluster contains no engines.

Definition at line 795 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone.

Stop monitoring resource usage.

Four critical resource usage indicators (for parallel execution), 
namely, %cpu, %iowait, %mem and %memswap on all hosts are continuously
checked. This information can be used to tune the parallel performance.

Normally, one does not call this function directly. The init_cluster
will call this function.

Definition at line 946 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, simple_cluster.simple_cluster._resource_on, simple_cluster.simple_cluster._resource_running, and simple_cluster.simple_cluster._rsrc.

def simple_cluster.simple_cluster.uniqueIP (   self,
  hostname 
)
Returns a unique IP address of the given hostname,
i.e. not 127.0.0.1 for localhost but localhost's global IP

Definition at line 59 of file simple_cluster.py.

Referenced by simple_cluster.simple_cluster.shell(), and simple_cluster.simple_cluster.validate_hosts().
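The idea behind uniqueIP can be sketched with the standard socket module: if the name resolves to the loopback address, retry with the fully qualified host name. The injectable resolver and fqdn parameters are assumptions added purely to make the sketch testable; the real method's signature differs:

```python
import socket

def unique_ip(hostname, resolve=socket.gethostbyname, fqdn=socket.getfqdn):
    """Sketch: map a hostname to a non-loopback IP address.

    If the name resolves to 127.x.x.x (loopback), resolve the fully
    qualified host name instead, so that remote engines can reach
    the controller. Illustrative only; not the actual implementation.
    """
    ip = resolve(hostname)
    if ip.startswith('127.'):
        ip = resolve(fqdn())
    return ip
```

With the default arguments this performs real DNS lookups; the test below substitutes a fake resolver to stay deterministic.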

Set up repeated resource checking.

Four critical resource usage indicators (for parallel execution), 
namely, %cpu, %iowait, %mem and %memswap on all hosts are continuously
checked. This information can be used to tune the parallel performance.

Normally, one does not call this function directly. The init_cluster
will call this function.

Definition at line 925 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, simple_cluster.simple_cluster._resource_on, simple_cluster.simple_cluster._resource_running, and simple_cluster.simple_cluster.check_resource().

Referenced by simple_cluster.simple_cluster.start_resource().

def simple_cluster.simple_cluster.use_engines (   self,
  use_id = [],
  spreadhost = 1 
)
Use engines from a given list.

Keyword arguments:
use_id -- the list of engine ids
spreadhost -- whether to apply host first policy

Example:
CASA <52>: sl._cluster.get_ids()
  Out[52]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
CASA <54>: sl.use_engines([0, 1, 2, 9])
  Out[54]: [0, 1, 2, 9]
CASA <55>: sl.use_engines()
  Out[55]: [4, 8, 0, 5, 9, 1, 6, 10, 2, 7, 11, 3]

Definition at line 2005 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, nf_regression.append, and simple_cluster.simple_cluster.get_hosts().

Referenced by simple_cluster.simple_cluster.simple_split().
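The "host first" policy shown in the last output can be sketched as a round-robin interleave over per-host engine lists, so that consecutive jobs land on different hosts. This is an illustrative sketch only; the real ordering produced by use_engines may differ:

```python
def spread_across_hosts(engines_by_host):
    """Interleave engine ids round-robin across hosts ("host first").

    engines_by_host is assumed to be a list of per-host engine-id lists;
    the result visits one engine per host before revisiting any host.
    """
    ordered = []
    queues = [list(ids) for ids in engines_by_host]
    while any(queues):
        for q in queues:
            if q:
                ordered.append(q.pop(0))
    return ordered
```

For three hosts with 4 engines each ([0-3], [4-7], [8-11]) this yields [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11].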

def simple_cluster.simple_cluster.use_hosts (   self,
  host_list = [],
  engines_each = 0 
)
Use engines on the given nodes.

Keyword arguments:
host_list -- the list of hosts
engines_each -- number of engines to use on each host

Example:
CASA <45>: sl.get_hosts
  Out[45]:
  [['casa-dev-07', 4, '/home/casa-dev-07/hye/ptest'],
   ['casa-dev-08', 4, '/home/casa-dev-08/hye/ptest'],
   ['casa-dev-10', 4, '/home/casa-dev-10/hye/ptest']]
CASA <46>: sl.use_hosts(['casa-dev-07', 'casa-dev-10'], 2)
  Out[46]: [8, 9, 0, 1]

Definition at line 1935 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, nf_regression.append, and simple_cluster.simple_cluster.get_hosts().

def simple_cluster.simple_cluster.use_paths (   self,
  dir_list = [] 
)

engine selection functions

Use the engines closest to the given directories (or MSs).

Keyword arguments:
dir_list -- the list of directories (or MSs)

Example:
CASA <33>: sl.list_result
Out[33]:
['#### new split ####################### Mon Mar 14 14:48:08 2011 ####',
CASA <34>: sl.get_output('new split', 'outputvis', field='3C84')
  Out[34]:
['/home/casa-dev-07/hye/ptest/csplit/10B-209a_5s-f1-s8.ms',
 '/home/casa-dev-07/hye/ptest/csplit/10B-209a_5s-f0-s1.ms',
 '/home/casa-dev-07/hye/ptest/csplit/10B-209a_5s-f1-s9.ms']
CASA <35>: sl.use_paths(
     '/home/casa-dev-10/hye/ptest/csplit/10B-209a_5s-f1-s14.ms')
  Out[35]: [8]

Definition at line 1858 of file simple_cluster.py.

References simple_cluster.simple_cluster._configdone, nf_regression.append, and simple_cluster.simple_cluster.get_hosts().

Validate the cluster specification.

This function is normally called internally by the config_cluster
function. 

Definition at line 406 of file simple_cluster.py.

References simple_cluster.simple_cluster._hosts, and simple_cluster.simple_cluster.uniqueIP().

Referenced by simple_cluster.simple_cluster.config_cluster().


Member Data Documentation

Definition at line 21 of file simple_cluster.py.

Referenced by simple_cluster.simple_cluster.config_cluster().

Definition at line 20 of file simple_cluster.py.

Referenced by simple_cluster.simple_cluster.config_cluster().

Definition at line 19 of file simple_cluster.py.

Referenced by simple_cluster.simple_cluster.config_cluster().

Definition at line 41 of file simple_cluster.py.

Referenced by simple_cluster.simple_cluster.init_cluster().

Definition at line 38 of file simple_cluster.py.

jagonzal (CAS-4276): New cluster specification file ###

Definition at line 37 of file simple_cluster.py.

Referenced by simple_cluster.simple_cluster.check_job(), simple_cluster.simple_cluster.check_status(), simple_cluster.simple_cluster.clear_project(), simple_cluster.simple_cluster.cold_start(), simple_cluster.simple_cluster.config_cluster(), simple_cluster.simple_cluster.create_project(), simple_cluster.simple_cluster.do_and_record(), simple_cluster.simple_cluster.do_project(), simple_cluster.simple_cluster.erase_project(), simple_cluster.simple_cluster.erase_projects(), simple_cluster.simple_cluster.erase_result(), simple_cluster.simple_cluster.get_engine_store(), simple_cluster.simple_cluster.get_host(), simple_cluster.simple_cluster.get_hosts(), simple_cluster.simple_cluster.get_jobId(), simple_cluster.simple_cluster.get_output(), simple_cluster.simple_cluster.get_result(), simple_cluster.simple_cluster.get_status(), simple_cluster.simple_cluster.init_cluster(), simple_cluster.simple_cluster.list_project(), simple_cluster.simple_cluster.list_projects(), simple_cluster.simple_cluster.list_result(), simple_cluster.simple_cluster.remove_record(), simple_cluster.simple_cluster.reset_project(), simple_cluster.simple_cluster.show_queue(), simple_cluster.simple_cluster.show_resource(), simple_cluster.simple_cluster.start_cluster(), simple_cluster.simple_cluster.start_logger(), simple_cluster.simple_cluster.start_monitor(), simple_cluster.simple_cluster.start_resource(), simple_cluster.simple_cluster.stop_monitor(), simple_cluster.simple_cluster.stop_nodes(), simple_cluster.simple_cluster.stop_resource(), simple_cluster.simple_cluster.update_resource(), simple_cluster.simple_cluster.use_engines(), simple_cluster.simple_cluster.use_hosts(), and simple_cluster.simple_cluster.use_paths().

Definition at line 39 of file simple_cluster.py.

Definition at line 33 of file simple_cluster.py.

Referenced by simple_cluster.simple_cluster.check_resource().

Definition at line 34 of file simple_cluster.py.

Referenced by simple_cluster.simple_cluster.check_resource().


The documentation for this class was generated from the following file: