<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://csml-wiki.northwestern.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Wzhwei</id>
	<title>csml-wiki.northwestern.edu - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://csml-wiki.northwestern.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Wzhwei"/>
	<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php/Special:Contributions/Wzhwei"/>
	<updated>2026-04-09T02:59:45Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.2</generator>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=Simulations&amp;diff=617</id>
		<title>Simulations</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=Simulations&amp;diff=617"/>
		<updated>2016-04-10T04:17:08Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: /* Compute RDF using &amp;#039;rerun&amp;#039; command */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Assorted topics relevant to programming particle-based simulation codes and to using these codes for the modeling of a wide range of systems, notably complex fluids.&lt;br /&gt;
&lt;br /&gt;
== Molecular dynamics simulations ==&lt;br /&gt;
&lt;br /&gt;
For many projects, the CSML uses [http://lammps.sandia.gov/ LAMMPS] for molecular dynamics, although (depending on the application) we also employ [http://www.ks.uiuc.edu/Research/namd/ NAMD] and [http://www.gromacs.org/ GROMACS]. &lt;br /&gt;
&lt;br /&gt;
For educational purposes, Moldy is strongly recommended, not least because it offers an excellent [http://ariadne.ms.northwestern.edu/Moldy/moldy-manual.pdf manual].  For class purposes, we maintain a separate list of [http://ariadne.ms.northwestern.edu/Moldy/moldy_homework.html executables] for various operating systems.&lt;br /&gt;
&lt;br /&gt;
Many questions regarding LAMMPS can be resolved by consulting the [http://lammps.sandia.gov/doc/Manual.html manual], but we address some common problems below.&lt;br /&gt;
&lt;br /&gt;
===LAMMPS Special Usage Notes===&lt;br /&gt;
&lt;br /&gt;
====Temperature Normalization====&lt;br /&gt;
&lt;br /&gt;
By default LAMMPS normalizes the temperature by an amount &#039;&#039;n&#039;&#039;&amp;lt;sub&amp;gt;dof&amp;lt;/sub&amp;gt; - &#039;&#039;d&#039;&#039;, where &#039;&#039;n&#039;&#039;&amp;lt;sub&amp;gt;dof&amp;lt;/sub&amp;gt;&lt;br /&gt;
is the system&#039;s total number of degrees of freedom and &#039;&#039;d&#039;&#039; its dimensionality. Subtracting &#039;&#039;d&#039;&#039; accounts for the center-of-mass motion of the system. This leads to an incorrect reported value if the system has a proper frame of reference, e.g., when using a [http://lammps.sandia.gov/doc/fix_langevin.html Langevin thermostat] in which all particles interact with a stationary background solvent. In this case it is necessary to ensure &#039;&#039;n&#039;&#039;&amp;lt;sub&amp;gt;dof&amp;lt;/sub&amp;gt; is used instead of &#039;&#039;n&#039;&#039;&amp;lt;sub&amp;gt;dof&amp;lt;/sub&amp;gt; - &#039;&#039;d&#039;&#039;. To do this, use [http://lammps.sandia.gov/doc/compute_modify.html compute_modify] as follows&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
compute myTemp all temp&lt;br /&gt;
compute_modify myTemp extra 0&lt;br /&gt;
thermo_modify temp myTemp&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the above affects only the reported temperature; the dynamics are computed correctly regardless.&lt;br /&gt;
&lt;br /&gt;
====Compute RDF using &#039;rerun&#039; command====&lt;br /&gt;
&lt;br /&gt;
The &#039;rerun&#039; command in LAMMPS performs a post-processing simulation by reading the atom information, snapshot by snapshot, from the dump file(s) created by a previous simulation. The command syntax is as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt; rerun file1 file2 ... keyword args ... &amp;lt;/pre&amp;gt;&lt;br /&gt;
A detailed description of the syntax can be found on the [http://lammps.sandia.gov/doc/rerun.html LAMMPS website].&lt;br /&gt;
&lt;br /&gt;
Apart from the fact that the atoms&#039; positions (and possibly velocities, etc.) are predetermined by the dump file(s), we use the rerun command as if we were running a normal simulation (with some differences and limitations, explained below). When the rerun command is called, it invokes the [http://lammps.sandia.gov/doc/read_dump.html read_dump] command to read the dump file(s) snapshot by snapshot, each time invoking the [http://lammps.sandia.gov/doc/run.html run] command to output computed energies, forces, and any thermo output or diagnostic info the user has defined. Thus, in the input file for this pseudo-simulation, we must define a system, units, dimensions, box, etc., and these will typically be identical to the original simulation.&lt;br /&gt;
&lt;br /&gt;
Commands from the original simulation that should not be included are dump commands and time-integration fixes (e.g., fix nve; rerun only examines single moments in time and cannot perform time integration). Fixes that constrain forces on atoms (such as fix langevin) can in general be invoked, but it does not make sense to do so when computing the RDF (even if the Langevin thermostat was employed in the original simulation).&lt;br /&gt;
&lt;br /&gt;
As an example, let us consider computing the RDF for a typical Lennard-Jones fluid past the interaction cutoff, and let us assume that we have already generated a dumpfile containing information on the atom positions over some set of timesteps. Then we will run a second simulation that reads in the particle positions from the dumpfile(s) over some subset of the original recorded timesteps ([http://lammps.sandia.gov/doc/rerun.html see arguments for the rerun command]), and will compute and output the RDF with the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
compute    rdfID groupID rdf N #computes rdf with N bins&lt;br /&gt;
fix        fixID groupID ave/time Nevery Nrepeat Nfreq c_[rdfID] file rdf.dat mode vector # see note below&lt;br /&gt;
rerun      dump.dat dump x y z # &#039;dump.dat&#039; is the dumpfile to be read &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;i&amp;gt;&lt;br /&gt;
Notes:&lt;br /&gt;
* Since the [http://lammps.sandia.gov/doc/compute_rdf.html compute rdf] command will only compute the RDF up to the interaction cutoff distance, we must change this parameter in the [http://lammps.sandia.gov/doc/pair_style.html pair_style] and [http://lammps.sandia.gov/doc/pair_coeff.html pair_coeff] commands so that we can obtain the RDF over the desired domain (i.e. if we want to compute the RDF up to a cutoff of 4.0, we would set the &#039;cutoff&#039; arguments in those commands to 4.0).&lt;br /&gt;
* While the rerun command creates a set of atoms for every snapshot of the dump file that it reads, the compute rdf command expects a set of atoms to be present at the start of the rerun simulation (remember, the compute command comes before the rerun command) and will produce an error if no atoms are present. To avoid this, recreate the atoms at the beginning of the rerun input with the same [http://lammps.sandia.gov/doc/create_atoms.html create_atoms] command (or by reading the data file via the [http://lammps.sandia.gov/doc/read_data.html read_data] command) that was used to create the atoms in the original simulation. &lt;br /&gt;
* When using rerun for 2D simulations, set dimension to 2 in the input file and read in only two coordinates with the rerun command, for example &amp;quot;rerun dump.dat dump x y&amp;quot;.&lt;br /&gt;
&amp;lt;/i&amp;gt;&lt;br /&gt;
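&lt;br /&gt;
As a concrete illustration, the notes above can be combined into a minimal rerun input. This is only a sketch: the data file name, the cutoff of 4.0, the bin count of 100, and the averaging settings are placeholder choices and must be adapted to the original simulation (in particular, the ave/time arguments must match the timesteps recorded in the dump file):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
units       lj&lt;br /&gt;
dimension   3&lt;br /&gt;
read_data   data.lj                 # recreate the atoms so compute rdf finds a system&lt;br /&gt;
pair_style  lj/cut 4.0              # cutoff extended to the desired RDF range&lt;br /&gt;
pair_coeff  * * 1.0 1.0 4.0&lt;br /&gt;
compute     myRDF all rdf 100       # RDF with 100 bins&lt;br /&gt;
fix         1 all ave/time 1 1 1 c_myRDF[*] file rdf.dat mode vector&lt;br /&gt;
rerun       dump.dat dump x y z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;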
&lt;br /&gt;
== Monte Carlo simulations ==&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Quest&amp;diff=579</id>
		<title>General Usage of Quest</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Quest&amp;diff=579"/>
		<updated>2015-09-29T18:43:18Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Login&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh [netid]@quest.it.northwestern.edu    &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;tt&amp;gt;[netid]&amp;lt;/tt&amp;gt; is your NETID. The first time you log in, you will be asked for a file in which to save the key and then for a passphrase (twice). Simply press &amp;quot;Enter&amp;quot; at all three prompts and you should be able to log in successfully.&lt;br /&gt;
&lt;br /&gt;
Our group folder is located in /projects/b1011/luijten-group&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Example of job.mbs file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# ### AUTOMATICALLY GENERATED BATCH FILE&lt;br /&gt;
#MOAB -q [queue_name]&lt;br /&gt;
#MOAB -A b1011&lt;br /&gt;
#MOAB -l walltime=[dd:hh:mm:ss]&lt;br /&gt;
&lt;br /&gt;
# ###name of job&lt;br /&gt;
#MOAB -N [name_of_job]&lt;br /&gt;
&lt;br /&gt;
# ### mail for begin/end/abort&lt;br /&gt;
#MOAB -m ea&lt;br /&gt;
#MOAB -M [email_address]                                                                                                      &lt;br /&gt;
&lt;br /&gt;
# ### number of nodes and processors per node&lt;br /&gt;
#MOAB -l nodes=2:ppn=6&lt;br /&gt;
&lt;br /&gt;
# ### indicates that job should not rerun if it fails&lt;br /&gt;
#MOAB -r n&lt;br /&gt;
&lt;br /&gt;
# ### stdout and stderr merged as stderr&lt;br /&gt;
#MOAB -j eo&lt;br /&gt;
&lt;br /&gt;
# ### write stderr to file&lt;br /&gt;
#MOAB -e log.err&lt;br /&gt;
&lt;br /&gt;
# ### the shell that interprets the job script&lt;br /&gt;
#MOAB -S /bin/bash&lt;br /&gt;
&lt;br /&gt;
module load [module]&lt;br /&gt;
cd /projects/b1011/luijten-group/[job_location]&lt;br /&gt;
time mpirun -np 12  [directory_name]/[lammps_version] -in input.dat&lt;br /&gt;
&lt;br /&gt;
if [ $? -eq 0 ] ; then&lt;br /&gt;
touch COMPLETED&lt;br /&gt;
fi   &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;[queue_name]&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are two options for the queue name: collab or collab-preempt. Both have a startup priority of 5000. collab allows a maximum of 262 cores and a maximum walltime of 7 days. There are no resource restrictions for collab-preempt, but note that jobs in queues ending in ‘-preempt’ can be interrupted and re-queued by jobs from a higher-priority queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[dd:hh:mm:ss]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the maximum allowed running time for your job. dd: days; hh: hours; mm: minutes; ss: seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[name_of_job]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the name of your job as it will be shown in the queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[email_address]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the email address that will receive the system notification when the job begins, ends, or aborts.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[module]&amp;lt;/tt&amp;gt;&lt;br /&gt;
The module to load. For mpirun this would be the mpi module.  For a full list of available modules, run &amp;lt;i&amp;gt;module avail&amp;lt;/i&amp;gt; from the command line.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[job_location]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the path to the directory containing your input file.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[directory_name]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the path to the directory containing the LAMMPS executable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[lammps_version]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the LAMMPS build you want to run; it must be located in [directory_name].&lt;br /&gt;
&lt;br /&gt;
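For example, with the placeholders filled in, the last part of the script might read as follows (the module, folder, and executable names here are hypothetical and must be replaced with your own):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load mpi&lt;br /&gt;
cd /projects/b1011/luijten-group/my_project&lt;br /&gt;
# -np 12 matches the 2 nodes x 6 processors per node requested above&lt;br /&gt;
time mpirun -np 12 /projects/b1011/luijten-group/bin/lmp_quest -in input.dat&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;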
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Submit jobs&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
msub job.mbs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Cancel jobs&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
canceljob [job_number]&lt;br /&gt;
canceljob `seq [first_job_number] [last_job_number]`&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Check job status&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show all jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq -r&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show running jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq -i&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show idle jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq -w user=[netid]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show jobs belonging to the user specified, where [netid] is your NETID. &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq -w acct=[account number]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show jobs belonging to the account specified. Grail allocation account number: b1011; CCTSM allocation account number: b1023; ESAM allocation account number: b1020. &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qstat &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show your own jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
checkjob [job_ID] &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This command displays detailed information about a submitted job’s status, as well as diagnostic information that can be useful for troubleshooting submission issues. It can also be used to obtain useful information about completed jobs, such as the allocated nodes, resources used, and exit codes. NUIT recommends using the flag ‘-vvv’ or ‘-v -v -v’ to gather additional diagnostic information. &lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=Notes_on_using_Microsoft_Word_for_manuscripts&amp;diff=575</id>
		<title>Notes on using Microsoft Word for manuscripts</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=Notes_on_using_Microsoft_Word_for_manuscripts&amp;diff=575"/>
		<updated>2015-09-29T16:36:54Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== 1. Notes on using MathType ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;span id=&amp;quot;auctex&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Since Mac OS and Windows use different graphics formats, when transferring documents between these platforms you should pay attention to [http://www.dessci.com/en/support/mathtype/tsn/TSN43.htm/ these issues] to avoid possible problems with MathType objects. When opening a previously saved Word file, you may find that MathType objects have become non-editable &amp;quot;pictures&amp;quot;. You can find the solution [http://www.dessci.com/en/support/mathtype/tsn/tsn103.htm here].&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Notes on using EndNote ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;span id=&amp;quot;auctex&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&#039;&#039;&#039;Change the format of multiple citations.&#039;&#039;&#039; Go to the EndNote toolbar and select &amp;quot;Edit&amp;quot; -&amp;gt; &amp;quot;Output Styles&amp;quot; -&amp;gt; &amp;quot;Edit &#039;[name of the style you&#039;re using]&#039;&amp;quot;. In the left column of the opened dialog, locate &amp;quot;Citations&amp;quot; and then click on &amp;quot;Templates&amp;quot;. In the &amp;quot;Multiple Citation Separator&amp;quot; option, you can change the format by adding or deleting the space after the semicolon.&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=Notes_on_using_Microsoft_Word_for_manuscripts&amp;diff=496</id>
		<title>Notes on using Microsoft Word for manuscripts</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=Notes_on_using_Microsoft_Word_for_manuscripts&amp;diff=496"/>
		<updated>2015-02-11T22:58:08Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: Created page with &amp;quot;=== 1. Notes on using MathType ===  &amp;lt;ol&amp;gt;  &amp;lt;li&amp;gt;&amp;lt;span id=&amp;quot;auctex&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Since Mac OS and Window platform use different graphic file format, when transferring documents between ...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== 1. Notes on using MathType ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;span id=&amp;quot;auctex&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Since Mac OS and Windows use different graphics file formats, when transferring documents between these platforms you should pay attention to [http://www.dessci.com/en/support/mathtype/tsn/TSN43.htm/ these issues] to avoid possible problems with MathType objects. When opening a previously saved Word file, you may find that MathType objects become non-editable &amp;quot;pictures&amp;quot;. You can find the solution [http://www.dessci.com/en/support/mathtype/tsn/tsn103.htm here].&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Quest&amp;diff=403</id>
		<title>General Usage of Quest</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Quest&amp;diff=403"/>
		<updated>2014-08-01T16:26:36Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Login&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh [netid]@quest.it.northwestern.edu    &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;tt&amp;gt;[netid]&amp;lt;/tt&amp;gt; is your NETID. The first time you log in, you will be asked to enter a file in which to save the key and then to enter a passphrase twice. Just press &amp;quot;Enter&amp;quot; at these three prompts and you should be able to log in successfully.&lt;br /&gt;
&lt;br /&gt;
Our group folder is located in /projects/b1011/luijten-group&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Example of job.mbs file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# ### AUTOMATICALLY GENERATED BATCH FILE&lt;br /&gt;
#MOAB -q [queue_name]&lt;br /&gt;
#MOAB -A b1011&lt;br /&gt;
#MOAB -l walltime=[dd:hh:mm:ss]&lt;br /&gt;
&lt;br /&gt;
# ###name of job&lt;br /&gt;
#MOAB -N [name_of_job]&lt;br /&gt;
&lt;br /&gt;
# ### mail for begin/end/abort&lt;br /&gt;
#MOAB -m ea&lt;br /&gt;
#MOAB -M [email_address]                                                                                                      &lt;br /&gt;
&lt;br /&gt;
# ### number of nodes and processors per node&lt;br /&gt;
#MOAB -l nodes=2:ppn=6&lt;br /&gt;
&lt;br /&gt;
# ### indicates that job should not rerun if it fails&lt;br /&gt;
#MOAB -r n&lt;br /&gt;
&lt;br /&gt;
# ### stdout and stderr merged as stderr&lt;br /&gt;
#MOAB -j eo&lt;br /&gt;
&lt;br /&gt;
# ### write stderr to file&lt;br /&gt;
#MOAB -e log.err&lt;br /&gt;
&lt;br /&gt;
# ### the shell that interprets the job script&lt;br /&gt;
#MOAB -S /bin/bash&lt;br /&gt;
&lt;br /&gt;
cd /projects/b1011/luijten-group/[job_location]&lt;br /&gt;
time mpirun -np 12  [directory_name]/lmp2013_mpi -in input.dat&lt;br /&gt;
&lt;br /&gt;
if [ $? -eq 0 ] ; then&lt;br /&gt;
touch COMPLETED&lt;br /&gt;
fi   &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;[queue_name]&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are two options for the queue name: collab or collab-preempt. Both have a startup priority of 5000. Collab has a maximum of 262 cores and a maximum walltime of 7 days. There are no resource restrictions for collab-preempt, but note that queues ending in ‘-preempt’ contain jobs that can be interrupted and re-queued by jobs from a higher-priority queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[dd:hh:mm:ss]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the maximum allowed running time for your job. dd: days; hh: hours; mm: minutes; ss: seconds.&lt;br /&gt;
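As a sketch, a hypothetical request for one day and twelve hours of walltime would read:&lt;br /&gt;

```shell
# walltime in dd:hh:mm:ss, here 1 day, 12 hours, 0 minutes, 0 seconds
#MOAB -l walltime=01:12:00:00
```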
&lt;br /&gt;
&amp;lt;tt&amp;gt;[name_of_job]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the name of your job as it will be shown in the queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[email_address]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the email address at which you will receive system notifications when the job begins, ends, or aborts.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[job_location]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the path to the folder where your input file is located.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[directory_name]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the path to the directory containing the lmp2013_mpi executable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
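The tail of the example script checks the exit status of the simulation run and leaves a marker file on success. A minimal self-contained sketch of that pattern, with true standing in for the mpirun invocation:&lt;br /&gt;

```shell
# run the (stand-in) simulation command; $? holds its exit status
true   # stand-in for: time mpirun -np 12 [directory_name]/lmp2013_mpi -in input.dat

# create the COMPLETED marker file only if the command exited with status 0
if [ $? -eq 0 ] ; then
    touch COMPLETED
fi
```

Downstream scripts can then test for the marker (e.g. with test -f COMPLETED) before starting any post-processing.&lt;br /&gt;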
&amp;lt;li&amp;gt;Submit jobs&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
msub job.mbs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Cancel jobs&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
canceljob [job_number]&lt;br /&gt;
canceljob `seq [first_job_number] [last_job_number]`&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Check job status&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show all jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq -r&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show running jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq -i&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show idle jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq -w user=[netid]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where [netid] is your NETID. This command will show only jobs belonging to the user specified.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qstat &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show your own jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
checkjob [job_ID] &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This command displays detailed information about a submitted job’s status, along with diagnostic information useful for troubleshooting submission issues. It can also be used to obtain information about completed jobs, such as the allocated nodes, resources used, and exit codes. NUIT recommends using the flag ‘-vvv’ (or ‘-v -v -v’) to gather additional diagnostic information.&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Quest&amp;diff=398</id>
		<title>General Usage of Quest</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Quest&amp;diff=398"/>
		<updated>2014-07-16T16:08:30Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Login&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh [netid]@quest.it.northwestern.edu    &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;tt&amp;gt;[netid]&amp;lt;/tt&amp;gt; is your NETID. The first time you log in, you will be asked to enter a file in which to save the key and then to enter a passphrase twice. Just press &amp;quot;Enter&amp;quot; at these three prompts and you should be able to log in successfully.&lt;br /&gt;
&lt;br /&gt;
Our group folder is located in /projects/b1011/luijten-group&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Example of job.mbs file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# ### AUTOMATICALLY GENERATED BATCH FILE&lt;br /&gt;
#MOAB -q [queue_name]&lt;br /&gt;
#MOAB -A b1011&lt;br /&gt;
#MOAB -l walltime=[dd:hh:mm:ss]&lt;br /&gt;
&lt;br /&gt;
# ###name of job&lt;br /&gt;
#MOAB -N [name_of_job]&lt;br /&gt;
&lt;br /&gt;
# ### mail for begin/end/abort&lt;br /&gt;
#MOAB -m ea&lt;br /&gt;
#MOAB -M [email_address]                                                                                                      &lt;br /&gt;
&lt;br /&gt;
# ### number of nodes and processors per node&lt;br /&gt;
#MOAB -l nodes=2:ppn=6&lt;br /&gt;
&lt;br /&gt;
# ### indicates that job should not rerun if it fails&lt;br /&gt;
#MOAB -r n&lt;br /&gt;
&lt;br /&gt;
# ### stdout and stderr merged as stderr&lt;br /&gt;
#MOAB -j eo&lt;br /&gt;
&lt;br /&gt;
# ### write stderr to file&lt;br /&gt;
#MOAB -e log.err&lt;br /&gt;
&lt;br /&gt;
# ### the shell that interprets the job script&lt;br /&gt;
#MOAB -S /bin/bash&lt;br /&gt;
&lt;br /&gt;
cd /projects/b1011/luijten-group/[job_location]&lt;br /&gt;
time mpirun -np 12  [directory_name]/lmp2013_mpi -in input.dat&lt;br /&gt;
&lt;br /&gt;
if [ $? -eq 0 ] ; then&lt;br /&gt;
touch COMPLETED&lt;br /&gt;
fi   &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;[queue_name]&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are two options for the queue name: collab or collab-preempt. Both have a startup priority of 5000. Collab has a maximum of 262 cores and a maximum walltime of 7 days. There are no resource restrictions for collab-preempt, but note that queues ending in ‘-preempt’ contain jobs that can be interrupted and re-queued by jobs from a higher-priority queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[dd:hh:mm:ss]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the maximum allowed running time for your job. dd: days; hh: hours; mm: minutes; ss: seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[name_of_job]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the name of your job as it will be shown in the queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[email_address]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the email address at which you will receive system notifications when the job begins, ends, or aborts.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[job_location]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the path to the folder where your input file is located.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[directory_name]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the path to the directory containing the lmp2013_mpi executable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Submit jobs&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
msub job.mbs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Check job status&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show all jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq -r&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show running jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq -i&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show idle jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq -w user=[netid]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where [netid] is your NETID. This command will show only jobs belonging to the user specified.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qstat &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show your own jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
checkjob [job_ID] &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This command displays detailed information about a submitted job’s status, along with diagnostic information useful for troubleshooting submission issues. It can also be used to obtain information about completed jobs, such as the allocated nodes, resources used, and exit codes. NUIT recommends using the flag ‘-vvv’ (or ‘-v -v -v’) to gather additional diagnostic information.&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Quest&amp;diff=397</id>
		<title>General Usage of Quest</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Quest&amp;diff=397"/>
		<updated>2014-07-16T16:07:34Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Login&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh [netid]@quest.it.northwestern.edu    &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;tt&amp;gt;[netid]&amp;lt;/tt&amp;gt; is your NETID. The first time you log in, you will be asked to enter a file in which to save the key and then to enter a passphrase twice. Just press &amp;quot;Enter&amp;quot; at these three prompts and you should be able to log in successfully.&lt;br /&gt;
&lt;br /&gt;
Our group folder is located in /projects/b1011/luijten-group&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Example of job.mbs file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# ### AUTOMATICALLY GENERATED BATCH FILE&lt;br /&gt;
#MOAB -q [queue_name]&lt;br /&gt;
#MOAB -A b1011&lt;br /&gt;
#MOAB -l walltime=[dd:hh:mm:ss]&lt;br /&gt;
&lt;br /&gt;
# ###name of job&lt;br /&gt;
#MOAB -N [name_of_job]&lt;br /&gt;
&lt;br /&gt;
# ### mail for begin/end/abort&lt;br /&gt;
#MOAB -m ea&lt;br /&gt;
#MOAB -M [email_address]                                                                                                      &lt;br /&gt;
&lt;br /&gt;
# ### number of nodes and processors per node&lt;br /&gt;
#MOAB -l nodes=2:ppn=6&lt;br /&gt;
&lt;br /&gt;
# ### indicates that job should not rerun if it fails&lt;br /&gt;
#MOAB -r n&lt;br /&gt;
&lt;br /&gt;
# ### stdout and stderr merged as stderr&lt;br /&gt;
#MOAB -j eo&lt;br /&gt;
&lt;br /&gt;
# ### write stderr to file&lt;br /&gt;
#MOAB -e log.err&lt;br /&gt;
&lt;br /&gt;
# ### the shell that interprets the job script&lt;br /&gt;
#MOAB -S /bin/bash&lt;br /&gt;
&lt;br /&gt;
cd /projects/b1011/luijten-group/[job_location]&lt;br /&gt;
time mpirun -np 12  [directory_name]/lmp2013_mpi -in input.dat&lt;br /&gt;
&lt;br /&gt;
if [ $? -eq 0 ] ; then&lt;br /&gt;
touch COMPLETED&lt;br /&gt;
fi   &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;[queue_name]&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are two options for the queue name: collab or collab-preempt. Both have a startup priority of 5000. Collab has a maximum of 262 cores and a maximum walltime of 7 days. There are no resource restrictions for collab-preempt, but note that queues ending in ‘-preempt’ contain jobs that can be interrupted and re-queued by jobs from a higher-priority queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[dd:hh:mm:ss]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the maximum allowed running time for your job. dd: days; hh: hours; mm: minutes; ss: seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[name_of_job]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the name of your job as it will be shown in the queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[email_address]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the email address at which you will receive system notifications when the job begins, ends, or aborts.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[job_location]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the path to the folder where your input file is located.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[directory_name]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the path to the directory containing the lmp2013_mpi executable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Submit jobs&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
msub job.mbs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Check job status&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show all jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq -r&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show running jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq -i&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show idle jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq -w user=[netid]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where [netid] is your NETID. This command will show only jobs belonging to the user specified.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qstat &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show your own jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
checkjob [job_ID] &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This command displays detailed information about a submitted job’s status, along with diagnostic information useful for troubleshooting submission issues. It can also be used to obtain information about completed jobs, such as the allocated nodes, resources used, and exit codes. NUIT recommends using the flag ‘-vvv’ (or ‘-v -v -v’) to gather additional diagnostic information.&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Quest&amp;diff=393</id>
		<title>General Usage of Quest</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Quest&amp;diff=393"/>
		<updated>2014-06-27T03:18:31Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Login&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh [netid]@quest.it.northwestern.edu    &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;tt&amp;gt;[netid]&amp;lt;/tt&amp;gt; is your NETID. The first time you log in, you will be asked to enter a file in which to save the key and then to enter a passphrase twice. Just press &amp;quot;Enter&amp;quot; at these three prompts and you should be able to log in successfully.&lt;br /&gt;
&lt;br /&gt;
Our group folder is located in /projects/b1011/luijten-group&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Example of job.mbs file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# ### AUTOMATICALLY GENERATED BATCH FILE&lt;br /&gt;
#MOAB -q [queue_name]&lt;br /&gt;
#MOAB -l advres=grail&lt;br /&gt;
#MOAB -A b1011&lt;br /&gt;
#MOAB -l walltime=[dd:hh:mm:ss]&lt;br /&gt;
&lt;br /&gt;
# ###name of job&lt;br /&gt;
#MOAB -N [name_of_job]&lt;br /&gt;
&lt;br /&gt;
# ### mail for begin/end/abort&lt;br /&gt;
#MOAB -m ea&lt;br /&gt;
#MOAB -M [email_address]                                                                                                      &lt;br /&gt;
&lt;br /&gt;
# ### number of nodes and processors per node&lt;br /&gt;
#MOAB -l nodes=2:ppn=6&lt;br /&gt;
&lt;br /&gt;
# ### indicates that job should not rerun if it fails&lt;br /&gt;
#MOAB -r n&lt;br /&gt;
&lt;br /&gt;
# ### stdout and stderr merged as stderr&lt;br /&gt;
#MOAB -j eo&lt;br /&gt;
&lt;br /&gt;
# ### write stderr to file&lt;br /&gt;
#MOAB -e log.err&lt;br /&gt;
&lt;br /&gt;
# ### the shell that interprets the job script&lt;br /&gt;
#MOAB -S /bin/bash&lt;br /&gt;
&lt;br /&gt;
cd /projects/b1011/luijten-group/[job_location]&lt;br /&gt;
time mpirun -np 12  [directory_name]/lmp2013_mpi -in input.dat&lt;br /&gt;
&lt;br /&gt;
if [ $? -eq 0 ] ; then&lt;br /&gt;
touch COMPLETED&lt;br /&gt;
fi   &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;[queue_name]&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are two options for the queue name: collab or collab-preempt. Both have a startup priority of 5000. Collab has a maximum of 262 cores and a maximum walltime of 7 days. There are no resource restrictions for collab-preempt, but note that queues ending in ‘-preempt’ contain jobs that can be interrupted and re-queued by jobs from a higher-priority queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[dd:hh:mm:ss]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the maximum allowed running time for your job. dd: days; hh: hours; mm: minutes; ss: seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[name_of_job]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the name of your job as it will be shown in the queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[email_address]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the email address at which you will receive system notifications when the job begins, ends, or aborts.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[job_location]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the path to the folder where your input file is located.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[directory_name]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the path to the directory containing the lmp2013_mpi executable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Submit jobs&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
msub job.mbs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Check job status&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show all jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq -r&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show running jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq -i&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show idle jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq -w user=[netid]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where [netid] is your NETID. This command will show only jobs belonging to the user specified.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qstat &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show your own jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
checkjob [job_ID] &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This command displays detailed information about a submitted job’s status, along with diagnostic information useful for troubleshooting submission issues. It can also be used to obtain information about completed jobs, such as the allocated nodes, resources used, and exit codes. NUIT recommends using the flag ‘-vvv’ (or ‘-v -v -v’) to gather additional diagnostic information.&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=Tools&amp;diff=329</id>
		<title>Tools</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=Tools&amp;diff=329"/>
		<updated>2014-06-17T15:46:39Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: /* UNIX */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Analysis tools ===&lt;br /&gt;
&lt;br /&gt;
Computer simulations involve the use of a set of analysis tools. Here, we share notes and ideas on programs commonly used in the CSML.&lt;br /&gt;
&lt;br /&gt;
* [[Autocorrelation]]&lt;br /&gt;
* [[Generic Analyzer]]&lt;br /&gt;
* [[Gnuplot]]&lt;br /&gt;
&lt;br /&gt;
=== Document processing ===&lt;br /&gt;
&lt;br /&gt;
Ideally, we prepare manuscripts in [http://en.wikipedia.org/wiki/LaTeX LaTeX].  Occasionally, especially when collaborating with other research groups, it may be necessary to work in Microsoft Word.&lt;br /&gt;
&lt;br /&gt;
* [[Notes on using LaTeX for manuscripts]]&lt;br /&gt;
* [[Notes on using Microsoft Word for manuscripts]]&lt;br /&gt;
&lt;br /&gt;
=== Job submission and scheduling ===&lt;br /&gt;
* [[Notes on Torque]]&lt;br /&gt;
* [[Notes on Maui]]&lt;br /&gt;
&lt;br /&gt;
=== UNIX ===&lt;br /&gt;
* [http://www.linuxproblem.org/art_9.html Password-less login via ssh]&lt;br /&gt;
* [[Compiler notes]]&lt;br /&gt;
* [[Command-line interface on Linux/UNIX]]&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=Tools&amp;diff=328</id>
		<title>Tools</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=Tools&amp;diff=328"/>
		<updated>2014-06-17T15:45:18Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: /* UNIX */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Analysis tools ===&lt;br /&gt;
&lt;br /&gt;
Computer simulations involve the use of a set of analysis tools. Here, we share notes and ideas on programs commonly used in the CSML.&lt;br /&gt;
&lt;br /&gt;
* [[Autocorrelation]]&lt;br /&gt;
* [[Generic Analyzer]]&lt;br /&gt;
* [[Gnuplot]]&lt;br /&gt;
&lt;br /&gt;
=== Document processing ===&lt;br /&gt;
&lt;br /&gt;
Ideally, we prepare manuscripts in [http://en.wikipedia.org/wiki/LaTeX LaTeX].  Occasionally, especially when collaborating with other research groups, it may be necessary to work in Microsoft Word.&lt;br /&gt;
&lt;br /&gt;
* [[Notes on using LaTeX for manuscripts]]&lt;br /&gt;
* [[Notes on using Microsoft Word for manuscripts]]&lt;br /&gt;
&lt;br /&gt;
=== Job submission and scheduling ===&lt;br /&gt;
* [[Notes on Torque]]&lt;br /&gt;
* [[Notes on Maui]]&lt;br /&gt;
&lt;br /&gt;
=== UNIX ===&lt;br /&gt;
* [http://www.linuxproblem.org/art_9.html/ Password-less login via ssh]&lt;br /&gt;
* [[Compiler notes]]&lt;br /&gt;
* [[Command-line interface on Linux/UNIX]]&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=Tools&amp;diff=327</id>
		<title>Tools</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=Tools&amp;diff=327"/>
		<updated>2014-06-17T15:45:04Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: /* UNIX */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Analysis tools ===&lt;br /&gt;
&lt;br /&gt;
Computer simulations involve the use of a set of analysis tools. Here, we share notes and ideas on programs commonly used in the CSML.&lt;br /&gt;
&lt;br /&gt;
* [[Autocorrelation]]&lt;br /&gt;
* [[Generic Analyzer]]&lt;br /&gt;
* [[Gnuplot]]&lt;br /&gt;
&lt;br /&gt;
=== Document processing ===&lt;br /&gt;
&lt;br /&gt;
Ideally, we prepare manuscripts in [http://en.wikipedia.org/wiki/LaTeX LaTeX].  Occasionally, especially when collaborating with other research groups, it may be necessary to work in Microsoft Word.&lt;br /&gt;
&lt;br /&gt;
* [[Notes on using LaTeX for manuscripts]]&lt;br /&gt;
* [[Notes on using Microsoft Word for manuscripts]]&lt;br /&gt;
&lt;br /&gt;
=== Job submission and scheduling ===&lt;br /&gt;
* [[Notes on Torque]]&lt;br /&gt;
* [[Notes on Maui]]&lt;br /&gt;
&lt;br /&gt;
=== UNIX ===&lt;br /&gt;
* [[http://www.linuxproblem.org/art_9.html/ Password-less login via ssh]]&lt;br /&gt;
* [[Compiler notes]]&lt;br /&gt;
* [[Command-line interface on Linux/UNIX]]&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=Tools&amp;diff=326</id>
		<title>Tools</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=Tools&amp;diff=326"/>
		<updated>2014-06-17T15:44:49Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: /* UNIX */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Analysis tools ===&lt;br /&gt;
&lt;br /&gt;
Computer simulations involve the use of a set of analysis tools. Here, we share notes and ideas on programs commonly used in the CSML.&lt;br /&gt;
&lt;br /&gt;
* [[Autocorrelation]]&lt;br /&gt;
* [[Generic Analyzer]]&lt;br /&gt;
* [[Gnuplot]]&lt;br /&gt;
&lt;br /&gt;
=== Document processing ===&lt;br /&gt;
&lt;br /&gt;
Ideally, we prepare manuscripts in [http://en.wikipedia.org/wiki/LaTeX LaTeX].  Occasionally, especially when collaborating with other research groups, it may be necessary to work in Microsoft Word.&lt;br /&gt;
&lt;br /&gt;
* [[Notes on using LaTeX for manuscripts]]&lt;br /&gt;
* [[Notes on using Microsoft Word for manuscripts]]&lt;br /&gt;
&lt;br /&gt;
=== Job submission and scheduling ===&lt;br /&gt;
* [[Notes on Torque]]&lt;br /&gt;
* [[Notes on Maui]]&lt;br /&gt;
&lt;br /&gt;
=== UNIX ===&lt;br /&gt;
* [[[http://www.linuxproblem.org/art_9.html/ Password-less login via ssh]]]&lt;br /&gt;
* [[Compiler notes]]&lt;br /&gt;
* [[Command-line interface on Linux/UNIX]]&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=Password-less_login_via_ssh&amp;diff=325</id>
		<title>Password-less login via ssh</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=Password-less_login_via_ssh&amp;diff=325"/>
		<updated>2014-06-17T15:43:24Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.linuxproblem.org/art_9.html/ Creating Password-less login via ssh]&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=Password-less_login_via_ssh&amp;diff=324</id>
		<title>Password-less login via ssh</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=Password-less_login_via_ssh&amp;diff=324"/>
		<updated>2014-06-17T15:42:36Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.linuxproblem.org/art_9.html / Creating Password-less login via ssh]&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=Password-less_login_via_ssh&amp;diff=323</id>
		<title>Password-less login via ssh</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=Password-less_login_via_ssh&amp;diff=323"/>
		<updated>2014-06-17T15:41:57Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: Created page with &amp;quot;[http://www.linuxproblem.org/art_9.html/ Creating Password-less login via ssh]&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.linuxproblem.org/art_9.html/ Creating Password-less login via ssh]&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Quest&amp;diff=307</id>
		<title>General Usage of Quest</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Quest&amp;diff=307"/>
		<updated>2014-06-10T20:28:39Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Login&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh [netid]@quest.it.northwestern.edu    &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;tt&amp;gt;[netid]&amp;lt;/tt&amp;gt; is your NetID. The first time you log in, you will be asked for the file in which to save the key and then for a passphrase, twice. Just press &amp;quot;enter&amp;quot; at all three prompts and you should log in successfully.&lt;br /&gt;
&lt;br /&gt;
Our group folder is located in /projects/b1011/luijten-group&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Example of job.mbs file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# ### AUTOMATICALLY GENERATED BATCH FILE&lt;br /&gt;
#MOAB -q [queue_name]&lt;br /&gt;
#MOAB -l advres=grail&lt;br /&gt;
#MOAB -A b1011&lt;br /&gt;
#MOAB -l walltime=[dd:hh:mm:ss]&lt;br /&gt;
&lt;br /&gt;
# ###name of job&lt;br /&gt;
#MOAB -N [name_of_job]&lt;br /&gt;
&lt;br /&gt;
# ### mail for begin/end/abort&lt;br /&gt;
#MOAB -m ea&lt;br /&gt;
#MOAB -M [email_address]                                                                                                      &lt;br /&gt;
&lt;br /&gt;
# ### number of nodes and processors per node&lt;br /&gt;
#MOAB -l nodes=2:ppn=6&lt;br /&gt;
&lt;br /&gt;
# ### indicates that job should not rerun if it fails&lt;br /&gt;
#MOAB -r n&lt;br /&gt;
&lt;br /&gt;
# ### stdout and stderr merged as stderr&lt;br /&gt;
#MOAB -j eo&lt;br /&gt;
&lt;br /&gt;
# ### write stderr to file&lt;br /&gt;
#MOAB -e log.err&lt;br /&gt;
&lt;br /&gt;
# ### the shell that interprets the job script&lt;br /&gt;
#MOAB -S /bin/bash&lt;br /&gt;
&lt;br /&gt;
cd /projects/b1011/luijten-group/[job_location]&lt;br /&gt;
time mpirun -np 12  ~/lammps-30Aug12_standard/src/lmp2013_mpi -in input.dat&lt;br /&gt;
&lt;br /&gt;
if [ $? -eq 0 ] ; then&lt;br /&gt;
touch COMPLETED&lt;br /&gt;
fi   &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;[queue_name]&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are two options for the queue name: collab or collab-preempt. Both have a startup priority of 5000. The collab queue allows at most 262 cores and a maximum walltime of 7 days. There are no resource restrictions for collab-preempt, but note that jobs in queues ending in '-preempt' can be interrupted and re-queued by jobs from a higher-priority queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[dd:hh:mm:ss]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the maximum allowed running time for your job. dd: days; hh: hours; mm: minutes; ss: seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[name_of_job]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the name of your job as it will be shown in the queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[email_address]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the email address at which you receive the system notification when the job begins, aborts, or ends.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[job_location]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the path of the folder where your input file is located.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Submit jobs&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
msub job.mbs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Check job status&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show all jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq -r&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show running jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq -i&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show idle jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq -w user=[netid]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where [netid] is your NetID. This command shows only the jobs belonging to the specified user.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qstat &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show your own jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
checkjob [job_ID] &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This command displays detailed information about a submitted job's status, along with diagnostic information that can be useful for troubleshooting submission issues. It can also be used to obtain information about completed jobs, such as the allocated nodes, the resources used, and the exit code. NUIT recommends using the flag '-vvv' (or '-v -v -v') to gather additional diagnostic information.&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
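The job script above ends with an exit-status check (`$?` followed by `touch COMPLETED`) so that a marker file records a successful run. That pattern can be tried locally without a scheduler; a minimal sketch, with `true` as a stand-in for the actual mpirun/LAMMPS command:

```shell
# Demonstrate the completion-marker pattern from the batch script.
cd "$(mktemp -d)"            # scratch directory for this demo

true                         # placeholder for the real simulation command
if [ $? -eq 0 ] ; then       # $? holds the exit status of the last command
    touch COMPLETED          # create the marker only on success
fi

if [ -f COMPLETED ] ; then
    echo "job finished successfully"
fi
```

After a batch run, the presence of the COMPLETED file in the job directory tells you at a glance that the command exited with status 0.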
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Quest&amp;diff=306</id>
		<title>General Usage of Quest</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Quest&amp;diff=306"/>
		<updated>2014-06-10T20:24:58Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Login&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh [netid]@quest.it.northwestern.edu    &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;tt&amp;gt;[netid]&amp;lt;/tt&amp;gt; is your NetID. The first time you log in, you will be asked for the file in which to save the key and then for a passphrase, twice. Just press &amp;quot;enter&amp;quot; at all three prompts and you should log in successfully.&lt;br /&gt;
&lt;br /&gt;
Our group folder is located in /projects/b1011/luijten-group&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Example of job.mbs file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# ### AUTOMATICALLY GENERATED BATCH FILE&lt;br /&gt;
#MOAB -q [queue_name]&lt;br /&gt;
#MOAB -l advres=grail&lt;br /&gt;
#MOAB -A b1011&lt;br /&gt;
#MOAB -l walltime=[dd:hh:mm:ss]&lt;br /&gt;
&lt;br /&gt;
# ###name of job&lt;br /&gt;
#MOAB -N [name_of_job]&lt;br /&gt;
&lt;br /&gt;
# ### mail for begin/end/abort&lt;br /&gt;
#MOAB -m ea&lt;br /&gt;
#MOAB -M [email_address]                                                                                                      &lt;br /&gt;
&lt;br /&gt;
# ### number of nodes and processors per node&lt;br /&gt;
#MOAB -l nodes=2:ppn=6&lt;br /&gt;
&lt;br /&gt;
# ### indicates that job should not rerun if it fails&lt;br /&gt;
#MOAB -r n&lt;br /&gt;
&lt;br /&gt;
# ### stdout and stderr merged as stderr&lt;br /&gt;
#MOAB -j eo&lt;br /&gt;
&lt;br /&gt;
# ### write stderr to file&lt;br /&gt;
#MOAB -e log.err&lt;br /&gt;
&lt;br /&gt;
# ### the shell that interprets the job script&lt;br /&gt;
#MOAB -S /bin/bash&lt;br /&gt;
&lt;br /&gt;
cd /projects/b1011/luijten-group/[job_location]&lt;br /&gt;
time mpirun -np 12  ~/lammps-30Aug12_standard/src/lmp2013_mpi -in input.dat&lt;br /&gt;
&lt;br /&gt;
if [ $? -eq 0 ] ; then&lt;br /&gt;
touch COMPLETED&lt;br /&gt;
fi   &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;[queue_name]&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are two options for the queue name: collab or collab-preempt. Both have a startup priority of 5000. The collab queue allows at most 262 cores and a maximum walltime of 7 days. There are no resource restrictions for collab-preempt, but note that jobs in queues ending in '-preempt' can be interrupted and re-queued by jobs from a higher-priority queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[dd:hh:mm:ss]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the maximum allowed running time for your job. dd: days; hh: hours; mm: minutes; ss: seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[name_of_job]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the name of your job as it will be shown in the queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[email_address]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the email address at which you receive the system notification when the job begins, aborts, or ends.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[job_location]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the path of the folder where your input file is located.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Submit jobs&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
msub job.mbs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Check job status&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show all jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq -r&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show running jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq -i&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show idle jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq -w user=[netid]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where [netid] is your NetID. This command shows only the jobs belonging to the specified user.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qstat&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show your own jobs &lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Hydra&amp;diff=303</id>
		<title>General Usage of Hydra</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Hydra&amp;diff=303"/>
		<updated>2014-06-09T22:54:44Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Login&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh hydra  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Example of job.pbs file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
### AUTOMATICALLY GENERATED BATCH FILE&lt;br /&gt;
&lt;br /&gt;
# ### name of job&lt;br /&gt;
#PBS -N [name_of_job]&lt;br /&gt;
&lt;br /&gt;
# ### mail for begin/end/abort&lt;br /&gt;
#PBS -m ea&lt;br /&gt;
#PBS -M [email_address]&lt;br /&gt;
&lt;br /&gt;
# ### maximum wall time&lt;br /&gt;
#PBS -l walltime=[dd:hh:mm:ss]&lt;br /&gt;
&lt;br /&gt;
# ### number of nodes and processors per node&lt;br /&gt;
#PBS -l nodes=1:ppn=1&lt;br /&gt;
&lt;br /&gt;
# ### queue&lt;br /&gt;
#PBS -q [queue_name]&lt;br /&gt;
&lt;br /&gt;
# ### indicates that job should not rerun if it fails&lt;br /&gt;
#PBS -r n&lt;br /&gt;
&lt;br /&gt;
# ### stdout and stderr merged as stderr&lt;br /&gt;
#PBS -j eo&lt;br /&gt;
&lt;br /&gt;
# ### write stderr to file&lt;br /&gt;
#PBS -e log.err&lt;br /&gt;
&lt;br /&gt;
# ### the shell that interprets the job script&lt;br /&gt;
#PBS -S /bin/bash&lt;br /&gt;
&lt;br /&gt;
cd /home/[job_location]&lt;br /&gt;
time /opt/lammps/lmp2013 &amp;lt; input.dat&lt;br /&gt;
&lt;br /&gt;
if [ $? -eq 0 ] ; then&lt;br /&gt;
touch COMPLETED&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;[name_of_job]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the name of the job as it will be shown in the queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[email_address]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the email address at which you receive the system notification when the job begins, aborts, or ends.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[dd:hh:mm:ss]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the maximum allowed running time for your job. dd: days; hh: hours; mm: minutes; ss: seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[queue_name]&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are two options for queue name: fast or default.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[job_location]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the path of the folder where your input file is located.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Submit jobs&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qsub job.pbs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Check job status&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qstat&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show status of all jobs.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qstat -u [username]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;tt&amp;gt;[username]&amp;lt;/tt&amp;gt; is your own username. This command will show the status of your own jobs.&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
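The `[dd:hh:mm:ss]` walltime strings used in the `#PBS -l walltime=` (and `#MOAB -l walltime=`) directives above can be sanity-checked by converting them to seconds. A minimal bash sketch; the function name `walltime_seconds` is my own, not part of Torque or Moab:

```shell
# Convert a dd:hh:mm:ss walltime string (as used in the job scripts)
# into a total number of seconds.
walltime_seconds() {
    local IFS=:
    set -- $1                                  # split the argument on ':' into dd hh mm ss
    d=${1#0}; h=${2#0}; m=${3#0}; s=${4#0}     # strip one leading zero so 08/09 are not parsed as octal
    echo $(( (d * 24 + h) * 3600 + m * 60 + s ))
}

walltime_seconds 01:02:30:00   # 1 day, 2 h 30 min -> prints 95400
```

Comparing the result against your queue's limit (e.g. 7 days = 604800 s for collab) before submitting avoids jobs being rejected for an over-long walltime request.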
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Hydra&amp;diff=302</id>
		<title>General Usage of Hydra</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Hydra&amp;diff=302"/>
		<updated>2014-06-09T22:54:08Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Login&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh hydra  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Example of job.pbs file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
### AUTOMATICALLY GENERATED BATCH FILE&lt;br /&gt;
&lt;br /&gt;
# ### name of job&lt;br /&gt;
#PBS -N [name_of_job]&lt;br /&gt;
&lt;br /&gt;
# ### mail for begin/end/abort&lt;br /&gt;
#PBS -m ea&lt;br /&gt;
#PBS -M [email_address]&lt;br /&gt;
&lt;br /&gt;
# ### maximum wall time&lt;br /&gt;
#PBS -l walltime=[dd:hh:mm:ss]&lt;br /&gt;
&lt;br /&gt;
# ### number of nodes and processors per node&lt;br /&gt;
#PBS -l nodes=1:ppn=1&lt;br /&gt;
&lt;br /&gt;
# ### queue&lt;br /&gt;
#PBS -q [queue_name]&lt;br /&gt;
&lt;br /&gt;
# ### indicates that job should not rerun if it fails&lt;br /&gt;
#PBS -r n&lt;br /&gt;
&lt;br /&gt;
# ### stdout and stderr merged as stderr&lt;br /&gt;
#PBS -j eo&lt;br /&gt;
&lt;br /&gt;
# ### write stderr to file&lt;br /&gt;
#PBS -e log.err&lt;br /&gt;
&lt;br /&gt;
# ### the shell that interprets the job script&lt;br /&gt;
#PBS -S /bin/bash&lt;br /&gt;
&lt;br /&gt;
cd /home/[job_location]&lt;br /&gt;
time /opt/lammps/lmp2013 &amp;lt; input.dat&lt;br /&gt;
&lt;br /&gt;
if [ $? -eq 0 ] ; then&lt;br /&gt;
touch COMPLETED&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;[name_of_job]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the name of the job as it will be shown in the queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[email_address]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the email address at which you receive the system notification when the job begins, aborts, or ends.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[dd:hh:mm:ss]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the maximum allowed running time for your job. dd: days; hh: hours; mm: minutes; ss: seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[queue_name]&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are two options for queue name: fast or default.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[job_location]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the path of the folder where your input file is located.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Submit jobs&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qsub job.pbs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Check job status&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qstat&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show status of all jobs.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qstat -u [username]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;tt&amp;gt;[username]&amp;lt;/tt&amp;gt; is your own user name. This command will show the status of your own jobs.&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Hydra&amp;diff=301</id>
		<title>General Usage of Hydra</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Hydra&amp;diff=301"/>
		<updated>2014-06-09T22:53:20Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Login&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh hydra  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Example of job.pbs file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 ### AUTOMATICALLY GENERATED BATCH FILE&lt;br /&gt;
&lt;br /&gt;
# ### name of job&lt;br /&gt;
#PBS -N [name_of_job]&lt;br /&gt;
&lt;br /&gt;
# ### mail for begin/end/abort&lt;br /&gt;
#PBS -m ea&lt;br /&gt;
#PBS -M [email_address]&lt;br /&gt;
&lt;br /&gt;
# ### maximum wall time&lt;br /&gt;
#PBS -l walltime=[dd:hh:mm:ss]&lt;br /&gt;
&lt;br /&gt;
# ### number of nodes and processors per node&lt;br /&gt;
#PBS -l nodes=1:ppn=1&lt;br /&gt;
&lt;br /&gt;
# ### queue&lt;br /&gt;
#PBS -q [queue_name]&lt;br /&gt;
&lt;br /&gt;
# ### indicates that job should not rerun if it fails&lt;br /&gt;
#PBS -r n&lt;br /&gt;
&lt;br /&gt;
# ### stdout and stderr merged as stderr&lt;br /&gt;
#PBS -j eo&lt;br /&gt;
&lt;br /&gt;
# ### write stderr to file&lt;br /&gt;
#PBS -e log.err&lt;br /&gt;
&lt;br /&gt;
# ### the shell that interprets the job script&lt;br /&gt;
#PBS -S /bin/bash&lt;br /&gt;
&lt;br /&gt;
cd /home/[job_location]&lt;br /&gt;
time /opt/lammps/lmp2013 &amp;lt; input.dat&lt;br /&gt;
&lt;br /&gt;
if [ $? -eq 0 ] ; then&lt;br /&gt;
touch COMPLETED&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;[name_of_job]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the name of the job as it will be shown in the queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[email_address]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the email address at which you receive the system notification when the job begins, aborts, or ends.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[dd:hh:mm:ss]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the maximum allowed running time for your job. dd: days; hh: hours; mm: minutes; ss: seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[queue_name]&amp;lt;/tt&amp;gt;&lt;br /&gt;
You can submit your jobs on fast nodes or default nodes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[job_location]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the path of the folder where your input file is located.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Submit jobs&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qsub job.pbs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Check job status&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qstat&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show status of all jobs.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qstat -u [username]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;tt&amp;gt;[username]&amp;lt;/tt&amp;gt; is your own user name. This command will show the status of your own jobs.&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Hydra&amp;diff=300</id>
		<title>General Usage of Hydra</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Hydra&amp;diff=300"/>
		<updated>2014-06-09T22:50:46Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Login&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh hydra  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Example of job.pbs file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 ### AUTOMATICALLY GENERATED BATCH FILE&lt;br /&gt;
&lt;br /&gt;
# ### name of job&lt;br /&gt;
#PBS -N [name_of_job]&lt;br /&gt;
&lt;br /&gt;
# ### mail for begin/end/abort&lt;br /&gt;
#PBS -m ea&lt;br /&gt;
#PBS -M [email_address]&lt;br /&gt;
&lt;br /&gt;
# ### maximum wall time&lt;br /&gt;
#PBS -l walltime=30:00:00:00&lt;br /&gt;
&lt;br /&gt;
# ### number of nodes and processors per node&lt;br /&gt;
#PBS -l nodes=1:ppn=1&lt;br /&gt;
&lt;br /&gt;
# # ### queue&lt;br /&gt;
###PBS -q [queue]&lt;br /&gt;
&lt;br /&gt;
# ### indicates that job should not rerun if it fails&lt;br /&gt;
#PBS -r n&lt;br /&gt;
&lt;br /&gt;
# ### stdout and stderr merged as stderr&lt;br /&gt;
#PBS -j eo&lt;br /&gt;
&lt;br /&gt;
# ### write stderr to file&lt;br /&gt;
#PBS -e log.err&lt;br /&gt;
&lt;br /&gt;
# ### the shell that interprets the job script&lt;br /&gt;
#PBS -S /bin/bash&lt;br /&gt;
&lt;br /&gt;
cd /home/[job_location]&lt;br /&gt;
time /opt/lammps/lmp2013 &amp;lt; input.dat&lt;br /&gt;
&lt;br /&gt;
if [ $? -eq 0 ] ; then&lt;br /&gt;
touch COMPLETED&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;[name_of_job]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the name of the job as it will be shown in the queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[email_address]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the email address at which you receive the system notification when the job begins, aborts, or ends.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[dd:hh:mm:ss]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the maximum allowed running time for your job. dd: days; hh: hours; mm: minutes; ss: seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[job_location]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the path of the folder where your input file is located.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Submit jobs&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qsub job.pbs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Check job status&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qstat&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show status of all jobs.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qstat -u [username]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;tt&amp;gt;[username]&amp;lt;/tt&amp;gt; is your own user name. This command will show the status of your own jobs.&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Quest&amp;diff=299</id>
		<title>General Usage of Quest</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Quest&amp;diff=299"/>
		<updated>2014-06-09T22:45:37Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Login&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh [netid]@quest.it.northwestern.edu    &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;tt&amp;gt;[netid]&amp;lt;/tt&amp;gt; is your NetID. The first time you log in, you will be asked for the file in which to save the key and then for a passphrase, twice. Just press &amp;quot;enter&amp;quot; at all three prompts and you should log in successfully.&lt;br /&gt;
&lt;br /&gt;
Our group folder is located in /projects/b1011/luijten-group&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Example of job.mbs file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# ### AUTOMATICALLY GENERATED BATCH FILE&lt;br /&gt;
#MOAB -q [queue_name]&lt;br /&gt;
#MOAB -l advres=grail&lt;br /&gt;
#MOAB -A b1011&lt;br /&gt;
#MOAB -l walltime=[dd:hh:mm:ss]&lt;br /&gt;
&lt;br /&gt;
# ###name of job&lt;br /&gt;
#MOAB -N [name_of_job]&lt;br /&gt;
&lt;br /&gt;
# ### mail for begin/end/abort&lt;br /&gt;
#MOAB -m ea&lt;br /&gt;
#MOAB -M [email_address]                                                                                                      &lt;br /&gt;
&lt;br /&gt;
# ### number of nodes and processors per node&lt;br /&gt;
#MOAB -l nodes=2:ppn=6&lt;br /&gt;
&lt;br /&gt;
# ### indicates that job should not rerun if it fails&lt;br /&gt;
#MOAB -r n&lt;br /&gt;
&lt;br /&gt;
# ### stdout and stderr merged as stderr&lt;br /&gt;
#MOAB -j eo&lt;br /&gt;
&lt;br /&gt;
# ### write stderr to file&lt;br /&gt;
#MOAB -e log.err&lt;br /&gt;
&lt;br /&gt;
# ### the shell that interprets the job script&lt;br /&gt;
#MOAB -S /bin/bash&lt;br /&gt;
&lt;br /&gt;
cd /projects/b1011/luijten-group/[job_location]&lt;br /&gt;
time mpirun -np 12  ~/lammps-30Aug12_standard/src/lmp2013_mpi -in input.dat&lt;br /&gt;
&lt;br /&gt;
if [ $? -eq 0 ] ; then&lt;br /&gt;
touch COMPLETED&lt;br /&gt;
fi   &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;[queue_name]&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are two options for the queue name: collab or collab-preempt. Both have a startup priority of 5000. The collab queue allows at most 262 cores and a maximum walltime of 7 days. There are no resource restrictions for collab-preempt, but note that jobs in queues ending in '-preempt' can be interrupted and re-queued by jobs from a higher-priority queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[dd:hh:mm:ss]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the maximum allowed running time for your job. dd: days; hh: hours; mm: minutes; ss: seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[name_of_job]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the name of your job as it will be shown in the queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[email_address]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the email address at which you receive the system notification when the job begins, aborts, or ends.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[job_location]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the path of the folder where your input file is located.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Submit jobs&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
msub job.mbs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Check job status&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show all jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qstat&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show your own jobs &lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Hydra&amp;diff=298</id>
		<title>General Usage of Hydra</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Hydra&amp;diff=298"/>
		<updated>2014-06-09T22:44:21Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: Created page with &amp;quot;&amp;lt;ul&amp;gt; &amp;lt;li&amp;gt;Login &amp;lt;pre&amp;gt; ssh hydra   &amp;lt;/pre&amp;gt; &amp;lt;/li&amp;gt; &amp;lt;li&amp;gt;Example of job.pbs file &amp;lt;pre&amp;gt;  ### AUTOMATICALLY GENERATED BATCH FILE  # ### name of job #PBS -N [name_]  # ### mail for begi...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Login&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh hydra  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Example of job.pbs file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 ### AUTOMATICALLY GENERATED BATCH FILE&lt;br /&gt;
&lt;br /&gt;
# ### name of job&lt;br /&gt;
#PBS -N [name_]&lt;br /&gt;
&lt;br /&gt;
# ### mail for begin/end/abort&lt;br /&gt;
#PBS -m ea&lt;br /&gt;
#PBS -M zonghuiroot@gmail.com&lt;br /&gt;
&lt;br /&gt;
# ### maximum cpu time&lt;br /&gt;
####PBS -l cput=5000:00:00&lt;br /&gt;
#PBS -l walltime=30:00:00:00&lt;br /&gt;
&lt;br /&gt;
# ### number of nodes and processors per node&lt;br /&gt;
#PBS -l nodes=1:ppn=1&lt;br /&gt;
&lt;br /&gt;
# # ### queue&lt;br /&gt;
###PBS -q fast&lt;br /&gt;
&lt;br /&gt;
# ### indicates that job should not rerun if it fails&lt;br /&gt;
#PBS -r n&lt;br /&gt;
&lt;br /&gt;
# ### stdout and stderr merged as stderr&lt;br /&gt;
#PBS -j eo&lt;br /&gt;
&lt;br /&gt;
# ### write stderr to file&lt;br /&gt;
#PBS -e log.err&lt;br /&gt;
&lt;br /&gt;
# ### the shell that interprets the job script&lt;br /&gt;
#PBS -S /bin/bash&lt;br /&gt;
&lt;br /&gt;
cd /home/zonghui/Scratch/Multi_chain/Double_length_DNA_experiment/ppa12.peg5_cr_5_double_ka_4/ppa.cat12.pegcat2.peg5&lt;br /&gt;
#mpirun -np 2 /opt/lammps/lmp2013_mpi -in input.dat&lt;br /&gt;
&lt;br /&gt;
time /opt/lammps/lmp2013 &amp;lt; input.dat&lt;br /&gt;
&lt;br /&gt;
#/home/wei/Install/lammps/current/src/lmp_suse_linux &amp;lt; input.dat&lt;br /&gt;
&lt;br /&gt;
if [ $? -eq 0 ] ; then&lt;br /&gt;
touch COMPLETED&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;[job_location]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the address of the folder where your input file is located.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[dd:hh:mm:ss]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the maximum allowed running time for your job. dd: days; hh: hours; mm: minutes; ss: seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[email_address]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the email address you used to receive the system notice when job begins, aborted or ended.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[job_location]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the address of the folder where your input file is located.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Submit jobs&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
msb job.mbs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Check job status&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show all jobs &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qstat&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
show your own jobs &lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=Hardware&amp;diff=297</id>
		<title>Hardware</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=Hardware&amp;diff=297"/>
		<updated>2014-06-09T22:41:28Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: /* Hydra */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Desktop machines ===&lt;br /&gt;
&lt;br /&gt;
All desktop machines run [http://www.opensuse.org OpenSuSE]. [[Installation instructions for OpenSuSE 13.1]].&lt;br /&gt;
&lt;br /&gt;
=== Clusters ===&lt;br /&gt;
&lt;br /&gt;
==== Minotaur ====&lt;br /&gt;
* 38 nodes, each containing two 4-core processors (304 cores total). 8 GB memory per node.&amp;lt;br&amp;gt;Processor type: Intel Xeon E5472, 3.0 GHz.&lt;br /&gt;
* Jobs are scheduled via [http://www.adaptivecomputing.com/products/open-source/torque/ Torque]/Maui.  [[Notes on Torque]].&lt;br /&gt;
&lt;br /&gt;
==== Hydra ====&lt;br /&gt;
* 60 nodes, each containing two 6-core processors (720 cores total). 12 GB memory per node.&amp;lt;br&amp;gt;8 nodes (queue &amp;quot;fast&amp;quot;, nodes h001-h008) have Intel Xeon X5690 3.47 GHz processors.&amp;lt;br&amp;gt;52 nodes (queue &amp;quot;default&amp;quot;, nodes h009-h060) have Intel Xeon E5645 2.40 GHz processors.&lt;br /&gt;
* Jobs are scheduled via [http://www.adaptivecomputing.com/products/open-source/torque/ Torque]/Maui.  [[Notes on Torque]].&lt;br /&gt;
* [[General Usage of Hydra]]&lt;br /&gt;
&lt;br /&gt;
==== Quest ====&lt;br /&gt;
* Jobs are scheduled via [http://www.adaptivecomputing.com/products/open-source/torque/ Torque]/Maui.  [[Notes on Torque]].&lt;br /&gt;
* [[General Usage of Quest]]&lt;br /&gt;
&lt;br /&gt;
=== Disk space, backups, and RAID storage ===&lt;br /&gt;
&lt;br /&gt;
==== Disk space allocations and nightly backups ====&lt;br /&gt;
&lt;br /&gt;
Each user has a home directory located on &#039;&#039;ariadne&#039;&#039;. This home directory is exported to all desktop machines, so you see the same home filesystem on each machine. The drive is protected against hardware failure via a [http://en.wikipedia.org/wiki/RAID_1#RAID_1 RAID-1] setup. Furthermore, each night all new or modified files on /home are written to tape (located in ariadne). It is therefore important not to store temporary data in your home folder, as it would quickly fill up the tape. Since users tend to forget this, a quota system has been enabled on ariadne, restricting each user to 15 GB. To check how much space you are using, log on to ariadne and issue the command&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
quota -s&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In addition, each user has significant additional storage on the scratch partitions. These drives are located in the different desktop machines and protected via RAID-1, but backups are your own responsibility. Note that these partitions are generally only mounted on the desktop machine that contains the corresponding drives. If you need a partition to be exported to a different machine, please ask.&lt;br /&gt;
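To see what is consuming your allocation before the quota bites, a minimal sketch using standard coreutils (it simply lists whatever happens to live in your home directory):&lt;br /&gt;

```shell
# List the ten largest top-level items in your home directory,
# largest first; stderr is discarded to hide permission errors.
du -sh "$HOME"/* 2>/dev/null | sort -rh | head -n 10
```

The same pipeline works on a scratch partition by replacing $HOME with the mount point.&lt;br /&gt;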
&lt;br /&gt;
==== Changing the nightly backup tape ====&lt;br /&gt;
&lt;br /&gt;
# Press eject button on tape drive in ariadne.&lt;br /&gt;
# Take the tape cartridge out of the drive and put it in its box (should be on top of ariadne). Label the box.&lt;br /&gt;
# Insert cleaning tape (on top of ariadne).  It will work for less than a minute and then eject automatically.&lt;br /&gt;
# Put cleaning tape back in box on top of ariadne.&lt;br /&gt;
# Insert new DDS tape.  Leave empty box on top of ariadne.&lt;br /&gt;
# Update settings in /usr/local/lib/backup, namely &#039;&#039;position&#039;&#039; and &#039;&#039;tapenumber&#039;&#039;; update logfile.&lt;br /&gt;
&lt;br /&gt;
==== Recovering data from the nightly backup tape ====&lt;br /&gt;
&lt;br /&gt;
Log files of all nightly backup tapes are located on ariadne, in /usr/local/lib/backup. For privacy reasons, these log files are accessible only to root. Once the proper file to be recovered has been identified, insert the corresponding tape into the drive on ariadne and follow these steps (all to be executed as root):&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;tt&amp;gt;cd /&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;(if you change to a different directory, the recovered file will be placed relative to this directory)&lt;br /&gt;
# &amp;lt;tt&amp;gt;/usr/local/bin/tape-rewind&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;(or &amp;lt;tt&amp;gt;mtst -f /dev/nst0 rewind&amp;lt;/tt&amp;gt;)&lt;br /&gt;
# &amp;lt;tt&amp;gt;mtst -f /dev/nst0 fsf &amp;lt;position&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;(see the contents file in /usr/local/lib/backup for the position number)&lt;br /&gt;
# &amp;lt;tt&amp;gt;tar xzvf /dev/nst0 &amp;lt;full_file_name_without_leading_slash&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;This step won&#039;t work unless you omit the leading slash; also note that you can specify multiple files, separated by spaces. The &#039;z&#039; option is necessary because all nightly backups are compressed. For wildcards, use --wildcards and escape &#039;*&#039; and &#039;?&#039;. For example: &amp;lt;tt&amp;gt;tar -x --wildcards -zvf /dev/nst0 \*datafiles\*&amp;lt;/tt&amp;gt;&lt;br /&gt;
# &amp;lt;tt&amp;gt;/usr/local/bin/tape-rewoffl&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;(or &amp;lt;tt&amp;gt;mtst -f /dev/nst0 rewoffl&amp;lt;/tt&amp;gt;)&lt;br /&gt;
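The five steps above can be collected into a single dry-run sketch. The &#039;position&#039; value and the file to recover are placeholders you must fill in from the contents file in /usr/local/lib/backup; drop the RUN=echo prefix to actually execute:&lt;br /&gt;

```shell
# Dry-run sketch of the recovery procedure; every command is echoed
# instead of executed. POS and the file name below are examples only.
RUN=echo
POS=3                         # from the contents file in /usr/local/lib/backup
cd / || exit 1                # recovered files land relative to here
$RUN mtst -f /dev/nst0 rewind
$RUN mtst -f /dev/nst0 fsf "$POS"
$RUN tar xzvf /dev/nst0 home/user/important.dat   # note: no leading slash
$RUN mtst -f /dev/nst0 rewoffl
```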
&lt;br /&gt;
==== Archiving data using the LTO tape drive ====&lt;br /&gt;
&lt;br /&gt;
==== Checking RAID status ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;span id=&amp;quot;hydra&amp;quot;&amp;gt;Hydra&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;OS is on software RAID (which spans /dev/sda and /dev/sdb). An overview is obtained via&lt;br /&gt;
&amp;lt;pre&amp;gt;cat /proc/mdstat&amp;lt;/pre&amp;gt;&lt;br /&gt;
Detailed information via&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mdadm --detail /dev/mdX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where X = 1, 5, 6, 7.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;/home&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;/archive&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Minotaur&amp;lt;br&amp;gt;Web interface. Log in as root to head node and use &#039;&#039;opera&#039;&#039;.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Ariadne&amp;lt;br&amp;gt;RAID-5 controller with 4 drives.  Status can be checked by interrogating the controller:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aALL | less&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The &#039;Device Present&#039; section reports whether any drives are critical or have failed, as well as the overall state of the RAID. More detailed information can be found via&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/opt/MegaRAID/MegaCli/MegaCli64 -LDPDInfo -aAll | less&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Right at the beginning (under &#039;Adapter #0&#039;) it should report &#039;State: Optimal&#039;.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Desktop machines, except pelops&amp;lt;br&amp;gt;Hardware RAID-1. The RAID status is reported upon reboot of a machine. Press Ctrl-C (when prompted) to enter the configuration utility. From within Linux, use (as root):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mpt-status -i 0&lt;br /&gt;
mpt-status -i 2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The second command applies only to machines with a second set of hard drives (achilles, agamemnon, nestor, poseidon).&amp;lt;br&amp;gt;&lt;br /&gt;
To allow regular users to verify the RAID status, the &amp;lt;tt&amp;gt;mpt-status&amp;lt;/tt&amp;gt; command has been added to &amp;lt;tt&amp;gt;sudo&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sudo mpt-status -i 0&lt;br /&gt;
sudo mpt-status -i 2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Pelops: Software RAID (for OS and scratch partitions). See [[#hydra|Hydra]].&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
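For the software-RAID machines, the mdstat check above can be scripted. This is a sketch assuming the usual /proc/mdstat layout, in which a failed member appears as an underscore in the [UU]-style status field; the function takes the file to inspect as an argument so it can also be tried on a saved copy:&lt;br /&gt;

```shell
# Print 'degraded' if any md array status field like [UU] contains an
# underscore (missing/failed member), 'ok' otherwise.
check_mdstat() {
    if grep -q '\[U*_[U_]*\]' "$1" 2>/dev/null; then
        echo degraded
    else
        echo ok
    fi
}
check_mdstat /proc/mdstat
```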
&lt;br /&gt;
=== Printers ===&lt;br /&gt;
&lt;br /&gt;
=== Scanner ===&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Quest&amp;diff=296</id>
		<title>General Usage of Quest</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=General_Usage_of_Quest&amp;diff=296"/>
		<updated>2014-06-09T22:30:25Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: Created page with &amp;quot;&amp;lt;ul&amp;gt; &amp;lt;li&amp;gt;Login &amp;lt;pre&amp;gt; ssh [netid]@quest.it.northwestern.edu     &amp;lt;/pre&amp;gt; where &amp;lt;tt&amp;gt;[netid]&amp;lt;/tt&amp;gt; is your NETID. The first time you login, it will ask you to enter file in which to...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Login&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh [netid]@quest.it.northwestern.edu    &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;tt&amp;gt;[netid]&amp;lt;/tt&amp;gt; is your NetID. The first time you log in, you will be asked to enter a file in which to save the key and then to enter a passphrase twice. Just press &amp;quot;enter&amp;quot; at all three prompts and you should be able to log in successfully.&lt;br /&gt;
&lt;br /&gt;
Our group folder is located in /projects/b1011/luijten-group.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Example of job.mbs file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# ### AUTOMATICALLY GENERATED BATCH FILE&lt;br /&gt;
#MOAB -q [queue_name]&lt;br /&gt;
#MOAB -l advres=grail&lt;br /&gt;
#MOAB -A b1011&lt;br /&gt;
#MOAB -l walltime=[dd:hh:mm:ss]&lt;br /&gt;
&lt;br /&gt;
# ###name of job&lt;br /&gt;
#MOAB -N [name_of_job]&lt;br /&gt;
&lt;br /&gt;
# ### mail for begin/end/abort&lt;br /&gt;
#MOAB -m ea&lt;br /&gt;
#MOAB -M [email_address]                                                                                                      &lt;br /&gt;
&lt;br /&gt;
# ### number of nodes and processors per node&lt;br /&gt;
#MOAB -l nodes=2:ppn=6&lt;br /&gt;
&lt;br /&gt;
# ### indicates that job should not rerun if it fails&lt;br /&gt;
#MOAB -r n&lt;br /&gt;
&lt;br /&gt;
# ### stdout and stderr merged as stderr&lt;br /&gt;
#MOAB -j eo&lt;br /&gt;
&lt;br /&gt;
# ### write stderr to file&lt;br /&gt;
#MOAB -e log.err&lt;br /&gt;
&lt;br /&gt;
# ### the shell that interprets the job script&lt;br /&gt;
#MOAB -S /bin/bash&lt;br /&gt;
&lt;br /&gt;
cd /projects/b1011/luijten-group/[job_location]&lt;br /&gt;
time mpirun -np 12  ~/lammps-30Aug12_standard/src/lmp2013_mpi -in input.dat&lt;br /&gt;
&lt;br /&gt;
if [ $? -eq 0 ] ; then&lt;br /&gt;
touch COMPLETED&lt;br /&gt;
fi   &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;[queue_name]&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are two options for the queue name: collab or collab-preempt. Both have a startup priority of 5000. Collab allows a maximum of 262 cores and a maximum walltime of 7 days. Collab-preempt has no resource restrictions, but note that queues ending in &#039;-preempt&#039; contain jobs that can be interrupted and re-queued by jobs from a higher-priority queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[dd:hh:mm:ss]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the maximum allowed running time for your job. dd: days; hh: hours; mm: minutes; ss: seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[email_address]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the email address at which you receive system notifications when the job begins, ends, or aborts.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[job_location]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the path of the folder where your input file is located.&lt;br /&gt;
&lt;br /&gt;
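The bracketed placeholders above can be filled in mechanically; a minimal sketch (the file names and substituted values are examples only, and the template here contains just three of the fields for brevity):&lt;br /&gt;

```shell
# Create a tiny template holding three of the placeholders used above,
# then substitute concrete values with sed. All values are examples.
printf '#MOAB -q [queue_name]\n#MOAB -l walltime=[dd:hh:mm:ss]\n#MOAB -N [name_of_job]\n' \
    > job.template
sed -e 's/\[queue_name\]/collab/' \
    -e 's/\[dd:hh:mm:ss\]/00:24:00:00/' \
    -e 's/\[name_of_job\]/lmp_run1/' \
    job.template > job.mbs
```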
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Submit jobs&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
msub job.mbs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Check job status&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
shows all jobs&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qstat&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
shows your own jobs&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=Hardware&amp;diff=295</id>
		<title>Hardware</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=Hardware&amp;diff=295"/>
		<updated>2014-06-09T22:30:18Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: /* Quest */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Desktop machines ===&lt;br /&gt;
&lt;br /&gt;
All desktop machines run [http://www.opensuse.org OpenSuSE]. [[Installation instructions for OpenSuSE 13.1]].&lt;br /&gt;
&lt;br /&gt;
=== Clusters ===&lt;br /&gt;
&lt;br /&gt;
==== Minotaur ====&lt;br /&gt;
* 38 nodes, each containing two 4-core processors (304 cores total). 8 GB memory per node.&amp;lt;br&amp;gt;Processor type: Intel Xeon E5472, 3.0 GHz.&lt;br /&gt;
* Jobs are scheduled via [http://www.adaptivecomputing.com/products/open-source/torque/ Torque]/Maui.  [[Notes on Torque]].&lt;br /&gt;
&lt;br /&gt;
==== Hydra ====&lt;br /&gt;
* 60 nodes, each containing two 6-core processors (720 cores total). 12 GB memory per node.&amp;lt;br&amp;gt;8 nodes (queue &amp;quot;fast&amp;quot;, nodes h001-h008) have Intel Xeon X5690 3.47 GHz processors.&amp;lt;br&amp;gt;52 nodes (queue &amp;quot;default&amp;quot;, nodes h009-h060) have Intel Xeon E5645 2.40 GHz processors.&lt;br /&gt;
* Jobs are scheduled via [http://www.adaptivecomputing.com/products/open-source/torque/ Torque]/Maui.  [[Notes on Torque]].&lt;br /&gt;
&lt;br /&gt;
==== Quest ====&lt;br /&gt;
* Jobs are scheduled via [http://www.adaptivecomputing.com/products/open-source/torque/ Torque]/Maui.  [[Notes on Torque]].&lt;br /&gt;
* [[General Usage of Quest]]&lt;br /&gt;
&lt;br /&gt;
=== Disk space, backups, and RAID storage ===&lt;br /&gt;
&lt;br /&gt;
==== Disk space allocations and nightly backups ====&lt;br /&gt;
&lt;br /&gt;
Each user has a home directory located on &#039;&#039;ariadne&#039;&#039;.  This home directory is exported to all desktop machines, so that you see the same home filesystem on each machine.  The drive is protected against hardware failure via a [http://en.wikipedia.org/wiki/RAID_1#RAID_1 RAID-1] setup.  Furthermore, each night all new or modified files on /home are written to tape (located in ariadne).  It is therefore important not to store temporary data in your home folder, as it would quickly fill up the tape.  Since users tend to forget this, a quota system has been enabled on ariadne, restricting each user to 15 GB. To check how much space you are using, log on to ariadne and issue the command&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
quota -s&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In addition, each user has significant additional storage on the scratch partitions. These drives are located in the different desktop machines and protected via RAID-1, but backups are your own responsibility. Note that these partitions are generally only mounted on the desktop machine that contains the corresponding drives. If you need a partition to be exported to a different machine, please ask.&lt;br /&gt;
&lt;br /&gt;
==== Changing the nightly backup tape ====&lt;br /&gt;
&lt;br /&gt;
# Press eject button on tape drive in ariadne.&lt;br /&gt;
# Take the tape cartridge out of the drive and put it in its box (should be on top of ariadne). Label the box.&lt;br /&gt;
# Insert cleaning tape (on top of ariadne).  It will work for less than a minute and then eject automatically.&lt;br /&gt;
# Put cleaning tape back in box on top of ariadne.&lt;br /&gt;
# Insert new DDS tape.  Leave empty box on top of ariadne.&lt;br /&gt;
# Update settings in /usr/local/lib/backup, namely &#039;&#039;position&#039;&#039; and &#039;&#039;tapenumber&#039;&#039;; update logfile.&lt;br /&gt;
&lt;br /&gt;
==== Recovering data from the nightly backup tape ====&lt;br /&gt;
&lt;br /&gt;
Log files of all nightly backup tapes are located on ariadne, in /usr/local/lib/backup. For privacy reasons, these log files are accessible only to root. Once the proper file to be recovered has been identified, insert the corresponding tape into the drive on ariadne and follow these steps (all to be executed as root):&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;tt&amp;gt;cd /&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;(if you change to a different directory, the recovered file will be placed relative to this directory)&lt;br /&gt;
# &amp;lt;tt&amp;gt;/usr/local/bin/tape-rewind&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;(or &amp;lt;tt&amp;gt;mtst -f /dev/nst0 rewind&amp;lt;/tt&amp;gt;)&lt;br /&gt;
# &amp;lt;tt&amp;gt;mtst -f /dev/nst0 fsf &amp;lt;position&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;(see the contents file in /usr/local/lib/backup for the position number)&lt;br /&gt;
# &amp;lt;tt&amp;gt;tar xzvf /dev/nst0 &amp;lt;full_file_name_without_leading_slash&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;This step won&#039;t work unless you omit the leading slash; also note that you can specify multiple files, separated by spaces. The &#039;z&#039; option is necessary because all nightly backups are compressed. For wildcards, use --wildcards and escape &#039;*&#039; and &#039;?&#039;. For example: &amp;lt;tt&amp;gt;tar -x --wildcards -zvf /dev/nst0 \*datafiles\*&amp;lt;/tt&amp;gt;&lt;br /&gt;
# &amp;lt;tt&amp;gt;/usr/local/bin/tape-rewoffl&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;(or &amp;lt;tt&amp;gt;mtst -f /dev/nst0 rewoffl&amp;lt;/tt&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
==== Archiving data using the LTO tape drive ====&lt;br /&gt;
&lt;br /&gt;
==== Checking RAID status ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;span id=&amp;quot;hydra&amp;quot;&amp;gt;Hydra&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;OS is on software RAID (which spans /dev/sda and /dev/sdb). An overview is obtained via&lt;br /&gt;
&amp;lt;pre&amp;gt;cat /proc/mdstat&amp;lt;/pre&amp;gt;&lt;br /&gt;
Detailed information via&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mdadm --detail /dev/mdX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where X = 1, 5, 6, 7.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;/home&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;/archive&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Minotaur&amp;lt;br&amp;gt;Web interface. Log in as root to head node and use &#039;&#039;opera&#039;&#039;.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Ariadne&amp;lt;br&amp;gt;RAID-5 controller with 4 drives.  Status can be checked by interrogating the controller:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aALL | less&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The &#039;Device Present&#039; section reports whether any drives are critical or have failed, as well as the overall state of the RAID. More detailed information can be found via&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/opt/MegaRAID/MegaCli/MegaCli64 -LDPDInfo -aAll | less&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Right at the beginning (under &#039;Adapter #0&#039;) it should report &#039;State: Optimal&#039;.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Desktop machines, except pelops&amp;lt;br&amp;gt;Hardware RAID-1. The RAID status is reported upon reboot of a machine. Press Ctrl-C (when prompted) to enter the configuration utility. From within Linux, use (as root):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mpt-status -i 0&lt;br /&gt;
mpt-status -i 2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The second command applies only to machines with a second set of hard drives (achilles, agamemnon, nestor, poseidon).&amp;lt;br&amp;gt;&lt;br /&gt;
To allow regular users to verify the RAID status, the &amp;lt;tt&amp;gt;mpt-status&amp;lt;/tt&amp;gt; command has been added to &amp;lt;tt&amp;gt;sudo&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sudo mpt-status -i 0&lt;br /&gt;
sudo mpt-status -i 2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Pelops: Software RAID (for OS and scratch partitions). See [[#hydra|Hydra]].&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Printers ===&lt;br /&gt;
&lt;br /&gt;
=== Scanner ===&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=Notes_on_Torque&amp;diff=294</id>
		<title>Notes on Torque</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=Notes_on_Torque&amp;diff=294"/>
		<updated>2014-06-09T22:28:19Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Overview ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== General usage ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Special usage notes ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Change the total cpu time allotted to a job via&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qalter -l cput=[new_cput] [job_id]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;tt&amp;gt;[new_cput]&amp;lt;/tt&amp;gt; is the desired amount of cpu time and &amp;lt;tt&amp;gt;[job_id]&amp;lt;/tt&amp;gt; is the job&#039;s ID number. Multiple jobs can be changed simultaneously by using &amp;lt;tt&amp;gt;`seq [min_job_id] [max_job_id]`&amp;lt;/tt&amp;gt; in place of &amp;lt;tt&amp;gt;job_id&amp;lt;/tt&amp;gt;. This will change all jobs &amp;lt;tt&amp;gt;[min_job_id]&amp;lt;/tt&amp;gt; through &amp;lt;tt&amp;gt;[max_job_id]&amp;lt;/tt&amp;gt;. Lowering the cpu time requirement of a job can decrease its wait time in the queue, as the scheduler is more likely to be able to use it for backfilling.  On the other hand, increasing the cpu time requirement can ensure that a job is able to finish properly (and can be done even while the job is running), but requires root permission.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Move a queued (i.e., waiting) job to a different queue via&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qmove [destination] [job_id]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;tt&amp;gt;[destination]&amp;lt;/tt&amp;gt; is the new queue (either &#039;fast&#039; or &#039;default&#039; for our system) and &amp;lt;tt&amp;gt;[job_id]&amp;lt;/tt&amp;gt; is the job ID.&lt;br /&gt;
&amp;lt;li&amp;gt;Delete a sequence of jobs via&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qdel `seq [job_id1] [job_id2]`&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;tt&amp;gt;[job_id1]&amp;lt;/tt&amp;gt; is the first job ID of the sequence of jobs you want to delete and &amp;lt;tt&amp;gt;[job_id2]&amp;lt;/tt&amp;gt; is the last one.&lt;br /&gt;
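The backtick construction works because command substitution splits seq&#039;s output into one argument per job ID; a quick illustration with echo standing in for qdel (the IDs are hypothetical):&lt;br /&gt;

```shell
# Command substitution expands seq's newline-separated output into
# separate arguments, one per job ID.
echo qdel $(seq 10234 10236)
# prints: qdel 10234 10235 10236
```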
&amp;lt;/ul&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=Notes_on_Torque&amp;diff=293</id>
		<title>Notes on Torque</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=Notes_on_Torque&amp;diff=293"/>
		<updated>2014-06-09T22:27:57Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Overview ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== General usage ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Special usage notes ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Change the total cpu time allotted to a job via&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qalter -l cput=[new_cput] [job_id]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;tt&amp;gt;[new_cput]&amp;lt;/tt&amp;gt; is the desired amount of cpu time and &amp;lt;tt&amp;gt;[job_id]&amp;lt;/tt&amp;gt; is the job&#039;s ID number. Multiple jobs can be changed simultaneously by using &amp;lt;tt&amp;gt;`seq [min_job_id] [max_job_id]`&amp;lt;/tt&amp;gt; in place of &amp;lt;tt&amp;gt;job_id&amp;lt;/tt&amp;gt;. This will change all jobs &amp;lt;tt&amp;gt;[min_job_id]&amp;lt;/tt&amp;gt; through &amp;lt;tt&amp;gt;[max_job_id]&amp;lt;/tt&amp;gt;. Lowering the cpu time requirement of a job can decrease its wait time in the queue, as the scheduler is more likely to be able to use it for backfilling.  On the other hand, increasing the cpu time requirement can ensure that a job is able to finish properly (and can be done even while the job is running), but requires root permission.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Move a queued (i.e., waiting) job to a different queue via&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qmove [destination] [job_id]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;tt&amp;gt;[destination]&amp;lt;/tt&amp;gt; is the new queue (either &#039;fast&#039; or &#039;default&#039; for our system) and &amp;lt;tt&amp;gt;[job_id]&amp;lt;/tt&amp;gt; is the job ID.&lt;br /&gt;
&amp;lt;li&amp;gt;Delete a sequence of jobs via&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qdel `seq [job_id1] [job_id2]`&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;tt&amp;gt;[job_id1]&amp;lt;/tt&amp;gt; is the first job ID of the sequence of jobs you want to delete and &amp;lt;tt&amp;gt;[job_id2]&amp;lt;/tt&amp;gt; is the last one.&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=General_Usage&amp;diff=292</id>
		<title>General Usage</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=General_Usage&amp;diff=292"/>
		<updated>2014-06-09T22:22:53Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Login&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh [netid]@quest.it.northwestern.edu    &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;tt&amp;gt;[netid]&amp;lt;/tt&amp;gt; is your NetID. The first time you log in, you will be asked to enter a file in which to save the key and then to enter a passphrase twice. Just press &amp;quot;enter&amp;quot; at all three prompts and you should be able to log in successfully.&lt;br /&gt;
&lt;br /&gt;
Our group folder is located in /projects/b1011/luijten-group.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Example of job.mbs file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# ### AUTOMATICALLY GENERATED BATCH FILE&lt;br /&gt;
#MOAB -q [queue_name]&lt;br /&gt;
#MOAB -l advres=grail&lt;br /&gt;
#MOAB -A b1011&lt;br /&gt;
#MOAB -l walltime=[dd:hh:mm:ss]&lt;br /&gt;
&lt;br /&gt;
# ###name of job&lt;br /&gt;
#MOAB -N [name_of_job]&lt;br /&gt;
&lt;br /&gt;
# ### mail for begin/end/abort&lt;br /&gt;
#MOAB -m ea&lt;br /&gt;
#MOAB -M [email_address]                                                                                                      &lt;br /&gt;
&lt;br /&gt;
# ### number of nodes and processors per node&lt;br /&gt;
#MOAB -l nodes=2:ppn=6&lt;br /&gt;
&lt;br /&gt;
# ### indicates that job should not rerun if it fails&lt;br /&gt;
#MOAB -r n&lt;br /&gt;
&lt;br /&gt;
# ### stdout and stderr merged as stderr&lt;br /&gt;
#MOAB -j eo&lt;br /&gt;
&lt;br /&gt;
# ### write stderr to file&lt;br /&gt;
#MOAB -e log.err&lt;br /&gt;
&lt;br /&gt;
# ### the shell that interprets the job script&lt;br /&gt;
#MOAB -S /bin/bash&lt;br /&gt;
&lt;br /&gt;
cd /projects/b1011/luijten-group/[job_location]&lt;br /&gt;
time mpirun -np 12  ~/lammps-30Aug12_standard/src/lmp2013_mpi -in input.dat&lt;br /&gt;
&lt;br /&gt;
if [ $? -eq 0 ] ; then&lt;br /&gt;
touch COMPLETED&lt;br /&gt;
fi   &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;[queue_name]&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are two options for the queue name: collab or collab-preempt. Both have a startup priority of 5000. Collab allows a maximum of 262 cores and a maximum walltime of 7 days. Collab-preempt has no resource restrictions, but note that queues ending in &#039;-preempt&#039; contain jobs that can be interrupted and re-queued by jobs from a higher-priority queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[dd:hh:mm:ss]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the maximum allowed running time for your job. dd: days; hh: hours; mm: minutes; ss: seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[email_address]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the email address at which you receive system notifications when the job begins, ends, or aborts.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[job_location]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the path of the folder where your input file is located.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Submit jobs&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
msub job.mbs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Check job status&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
showq&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
shows all jobs&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qstat&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
shows your own jobs&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=Hardware&amp;diff=291</id>
		<title>Hardware</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=Hardware&amp;diff=291"/>
		<updated>2014-06-09T22:17:34Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: /* Quest */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Desktop machines ===&lt;br /&gt;
&lt;br /&gt;
All desktop machines run [http://www.opensuse.org OpenSuSE]. [[Installation instructions for OpenSuSE 13.1]].&lt;br /&gt;
&lt;br /&gt;
=== Clusters ===&lt;br /&gt;
&lt;br /&gt;
==== Minotaur ====&lt;br /&gt;
* 38 nodes, each containing two 4-core processors (304 cores total). 8 GB memory per node.&amp;lt;br&amp;gt;Processor type: Intel Xeon E5472, 3.0 GHz.&lt;br /&gt;
* Jobs are scheduled via [http://www.adaptivecomputing.com/products/open-source/torque/ Torque]/Maui.  [[Notes on Torque]].&lt;br /&gt;
&lt;br /&gt;
==== Hydra ====&lt;br /&gt;
* 60 nodes, each containing two 6-core processors (720 cores total). 12 GB memory per node.&amp;lt;br&amp;gt;8 nodes (queue &amp;quot;fast&amp;quot;, nodes h001-h008) have Intel Xeon X5690 3.47 GHz processors.&amp;lt;br&amp;gt;52 nodes (queue &amp;quot;default&amp;quot;, nodes h009-h060) have Intel Xeon E5645 2.40 GHz processors.&lt;br /&gt;
* Jobs are scheduled via [http://www.adaptivecomputing.com/products/open-source/torque/ Torque]/Maui.  [[Notes on Torque]].&lt;br /&gt;
&lt;br /&gt;
==== Quest ====&lt;br /&gt;
* Jobs are scheduled via [http://www.adaptivecomputing.com/products/open-source/torque/ Torque]/Maui.  [[Notes on Torque]].&lt;br /&gt;
* [[General Usage]]&lt;br /&gt;
&lt;br /&gt;
=== Disk space, backups, and RAID storage ===&lt;br /&gt;
&lt;br /&gt;
==== Disk space allocations and nightly backups ====&lt;br /&gt;
&lt;br /&gt;
Each user has a home directory located on &#039;&#039;ariadne&#039;&#039;.  This home directory is exported to all desktop machines, so that you see the same home filesystem on each machine.  The drive is protected against hardware failure via a [http://en.wikipedia.org/wiki/RAID_1#RAID_1 RAID-1] setup.  Furthermore, each night all new or modified files on /home are written to tape (located in ariadne).  It is therefore important not to store temporary data in your home folder, as it would quickly fill up the tape.  Since users tend to forget this, a quota system has been enabled on ariadne, restricting each user to 15 GB. To check how much space you are using, log on to ariadne and issue the command&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
quota -s&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In addition, each user has significant additional storage on the scratch partitions. These drives are located in the different desktop machines and protected via RAID-1, but backups are your own responsibility. Note that these partitions are generally only mounted on the desktop machine that contains the corresponding drives. If you need a partition to be exported to a different machine, please ask.&lt;br /&gt;
&lt;br /&gt;
==== Changing the nightly backup tape ====&lt;br /&gt;
&lt;br /&gt;
# Press eject button on tape drive in ariadne.&lt;br /&gt;
# Take the tape cartridge out of the drive and put it in its box (should be on top of ariadne). Label the box.&lt;br /&gt;
# Insert cleaning tape (on top of ariadne).  It will work for less than a minute and then eject automatically.&lt;br /&gt;
# Put cleaning tape back in box on top of ariadne.&lt;br /&gt;
# Insert new DDS tape.  Leave empty box on top of ariadne.&lt;br /&gt;
# Update settings in /usr/local/lib/backup, namely &#039;&#039;position&#039;&#039; and &#039;&#039;tapenumber&#039;&#039;; update logfile.&lt;br /&gt;
&lt;br /&gt;
==== Recovering data from the nightly backup tape ====&lt;br /&gt;
&lt;br /&gt;
Log files of all nightly backup tapes are located on ariadne, in /usr/local/lib/backup. For privacy reasons, these log files are accessible only to root. Once the proper file to be recovered has been identified, insert the corresponding tape into the drive on ariadne and follow these steps (all to be executed as root):&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;tt&amp;gt;cd /&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;(if you change to a different directory, the recovered file will be placed relative to this directory)&lt;br /&gt;
# &amp;lt;tt&amp;gt;/usr/local/bin/tape-rewind&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;(or &amp;lt;tt&amp;gt;mtst -f /dev/nst0 rewind&amp;lt;/tt&amp;gt;)&lt;br /&gt;
# &amp;lt;tt&amp;gt;mtst -f /dev/nst0 fsf &amp;lt;position&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;(see the contents file in /usr/local/lib/backup for the position number)&lt;br /&gt;
# &amp;lt;tt&amp;gt;tar xzvf /dev/nst0 &amp;lt;full_file_name_without_leading_slash&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;This step won&#039;t work unless you omit the leading slash; also note that you can specify multiple files, separated by spaces. The &#039;z&#039; option is necessary because all nightly backups are compressed. For wildcards, use --wildcards and escape &#039;*&#039; and &#039;?&#039;. For example: &amp;lt;tt&amp;gt;tar -x --wildcards -zvf /dev/nst0 \*datafiles\*&amp;lt;/tt&amp;gt;&lt;br /&gt;
# &amp;lt;tt&amp;gt;/usr/local/bin/tape-rewoffl&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;(or &amp;lt;tt&amp;gt;mtst -f /dev/nst0 rewoffl&amp;lt;/tt&amp;gt;)&lt;br /&gt;
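The leading-slash rule in the &amp;lt;tt&amp;gt;tar&amp;lt;/tt&amp;gt; step trips people up. As a quick illustration, a tiny hypothetical helper (not part of the backup scripts) that turns an absolute path into the member name &amp;lt;tt&amp;gt;tar&amp;lt;/tt&amp;gt; expects on these tapes:&lt;br /&gt;

```shell
# Hypothetical helper (not part of the backup scripts): strip the leading
# slash from an absolute path, since the nightly backups store member
# names relative to /.
tape_member_name() {
  printf '%s\n' "${1#/}"
}

tape_member_name /home/user/project/datafile.txt   # prints home/user/project/datafile.txt
```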
&lt;br /&gt;
==== Archiving data using the LTO tape drive ====&lt;br /&gt;
&lt;br /&gt;
==== Checking RAID status ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;span id=&amp;quot;hydra&amp;quot;&amp;gt;Hydra&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The OS resides on software RAID (spanning /dev/sda and /dev/sdb). An overview is obtained via&lt;br /&gt;
&amp;lt;pre&amp;gt;cat /proc/mdstat&amp;lt;/pre&amp;gt;&lt;br /&gt;
Detailed information via&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mdadm --detail /dev/mdX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where X = 1, 5, 6, 7.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;/home&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;/archive&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Minotaur&amp;lt;br&amp;gt;Web interface. Log in to the head node as root and use &#039;&#039;opera&#039;&#039;.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Ariadne&amp;lt;br&amp;gt;RAID-5 controller with 4 drives.  Status can be checked by interrogating the controller:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aALL | less&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The &#039;Device Present&#039; section reports whether any drives are critical or have failed, and what state the RAID is in. More detailed information can be obtained via&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/opt/MegaRAID/MegaCli/MegaCli64 -LDPDInfo -aAll | less&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Right at the beginning (under &#039;Adapter #0&#039;), it should report &#039;State: Optimal&#039;.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Desktop machines, except pelops&amp;lt;br&amp;gt;Hardware RAID-1. The RAID status is reported upon reboot of a machine. Press Ctrl-C (when prompted) to enter the configuration utility. From within Linux, use (as root):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mpt-status -i 0&lt;br /&gt;
mpt-status -i 2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The second command only applies to machines with a second set of hard drives (achilles, agamemnon, nestor, poseidon).&amp;lt;br&amp;gt;&lt;br /&gt;
To allow regular users to verify the RAID status, the &amp;lt;tt&amp;gt;mpt-status&amp;lt;/tt&amp;gt; command has been added to the &amp;lt;tt&amp;gt;sudo&amp;lt;/tt&amp;gt; configuration:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sudo mpt-status -i 0&lt;br /&gt;
sudo mpt-status -i 2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Pelops: Software RAID (for OS and scratch partitions). See [[#hydra|Hydra]].&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
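For the machines on software RAID, a degraded array can also be spotted mechanically: in /proc/mdstat, a failed member appears as an underscore in the [UU] status field. A minimal sketch (the function name and the sample input are ours, not part of any system script):&lt;br /&gt;

```shell
# Sketch: succeed (exit 0) if mdstat-style input on stdin shows a degraded
# array, i.e. an underscore inside the [UU...] member-status brackets.
mdstat_degraded() {
  grep -qE '\[[U_]*_[U_]*\]'
}

# Sample fragment standing in for /proc/mdstat; [U_] means one failed member.
if printf 'md1 : active raid1 sdb1[1] sda1[0]\n      529600 blocks [2/1] [U_]\n' | mdstat_degraded; then
  echo DEGRADED
else
  echo OK
fi
# prints DEGRADED
```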
&lt;br /&gt;
=== Printers ===&lt;br /&gt;
&lt;br /&gt;
=== Scanner ===&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=Hardware&amp;diff=290</id>
		<title>Hardware</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=Hardware&amp;diff=290"/>
		<updated>2014-06-09T22:17:18Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: /* Quest */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Desktop machines ===&lt;br /&gt;
&lt;br /&gt;
All desktop machines run [http://www.opensuse.org OpenSuSE]. [[Installation instructions for OpenSuSE 13.1]].&lt;br /&gt;
&lt;br /&gt;
=== Clusters ===&lt;br /&gt;
&lt;br /&gt;
==== Minotaur ====&lt;br /&gt;
* 38 nodes, each containing two 4-core processors (304 cores total). 8 GB memory per node.&amp;lt;br&amp;gt;Processor type: Intel Xeon E5472, 3.0 GHz.&lt;br /&gt;
* Jobs are scheduled via [http://www.adaptivecomputing.com/products/open-source/torque/ Torque]/Maui.  [[Notes on Torque]].&lt;br /&gt;
&lt;br /&gt;
==== Hydra ====&lt;br /&gt;
* 60 nodes, each containing two 6-core processors (720 cores total). 12 GB memory per node.&amp;lt;br&amp;gt;8 nodes (queue &amp;quot;fast&amp;quot;, nodes h001-h008) have Intel Xeon X5690 3.47 GHz processors.&amp;lt;br&amp;gt;52 nodes (queue &amp;quot;default&amp;quot;, nodes h009-h060) have Intel Xeon E5645 2.40 GHz processors.&lt;br /&gt;
* Jobs are scheduled via [http://www.adaptivecomputing.com/products/open-source/torque/ Torque]/Maui.  [[Notes on Torque]].&lt;br /&gt;
&lt;br /&gt;
==== Quest ====&lt;br /&gt;
* Jobs are scheduled via [http://www.adaptivecomputing.com/products/open-source/torque/ Torque]/Maui.  [[Notes on Torque]].&lt;br /&gt;
* [[General Usage of Quest]]&lt;br /&gt;
&lt;br /&gt;
=== Disk space, backups, and RAID storage ===&lt;br /&gt;
&lt;br /&gt;
==== Disk space allocations and nightly backups ====&lt;br /&gt;
&lt;br /&gt;
Each user has a home directory located on &#039;&#039;ariadne&#039;&#039;.  This home directory is exported to all desktop machines, so that you see the same home filesystem on each machine.  The drive is protected against hardware failure via a [http://en.wikipedia.org/wiki/RAID_1#RAID_1 RAID-1] setup.  Furthermore, each night all new or modified files on /home are written to tape (located in ariadne).  This makes it important not to store temporary data in your home folder, as it would quickly fill up the tape.  Since users tend to forget this, a quota system has been enabled on ariadne, restricting each user to 15 GB. To check how much space you are using, log on to ariadne and issue the command&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
quota -s&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In addition, each user has significant additional storage on the scratch partitions. These drives are located in the different desktop machines and protected via RAID-1, but backups are your own responsibility. Note that these partitions are generally only mounted on the desktop machine that contains the corresponding drives. If you need a partition to be exported to a different machine, please ask.&lt;br /&gt;
&lt;br /&gt;
==== Changing the nightly backup tape ====&lt;br /&gt;
&lt;br /&gt;
# Press eject button on tape drive in ariadne.&lt;br /&gt;
# Take the tape cartridge out of the drive and put it in its box (should be on top of ariadne). Label the box.&lt;br /&gt;
# Insert cleaning tape (on top of ariadne).  It will work for less than a minute and then eject automatically.&lt;br /&gt;
# Put cleaning tape back in box on top of ariadne.&lt;br /&gt;
# Insert new DDS tape.  Leave empty box on top of ariadne.&lt;br /&gt;
# Update settings in /usr/local/lib/backup, namely &#039;&#039;position&#039;&#039; and &#039;&#039;tapenumber&#039;&#039;; update logfile.&lt;br /&gt;
&lt;br /&gt;
==== Recovering data from the nightly backup tape ====&lt;br /&gt;
&lt;br /&gt;
Log files of all nightly backup tapes are located on ariadne, in /usr/local/lib/backup. For privacy reasons, these logfiles are only accessible to root. Once the proper file to be recovered has been identified, insert the corresponding tape into the drive on ariadne and follow these steps (all to be executed as root):&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;tt&amp;gt;cd /&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;(if you change to a different directory, the recovered file will be placed relative to this directory)&lt;br /&gt;
# &amp;lt;tt&amp;gt;/usr/local/bin/tape-rewind&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;(or &amp;lt;tt&amp;gt;mtst -f /dev/nst0 rewind&amp;lt;/tt&amp;gt;)&lt;br /&gt;
# &amp;lt;tt&amp;gt;mtst -f /dev/nst0 fsf &amp;lt;position&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;(see the contents file in /usr/local/lib/backup for the position number)&lt;br /&gt;
# &amp;lt;tt&amp;gt;tar xzvf /dev/nst0 &amp;lt;full_file_name_without_leading_slash&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;This step won&#039;t work unless you omit the leading slash; also note that you can specify multiple files, separated by spaces. The &#039;z&#039; option is necessary because all nightly backups are compressed. For wildcards, use --wildcards and escape &#039;*&#039; and &#039;?&#039;. For example: &amp;lt;tt&amp;gt;tar -x --wildcards -zvf /dev/nst0 \*datafiles\*&amp;lt;/tt&amp;gt;&lt;br /&gt;
# &amp;lt;tt&amp;gt;/usr/local/bin/tape-rewoffl&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;(or &amp;lt;tt&amp;gt;mtst -f /dev/nst0 rewoffl&amp;lt;/tt&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
==== Archiving data using the LTO tape drive ====&lt;br /&gt;
&lt;br /&gt;
==== Checking RAID status ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;span id=&amp;quot;hydra&amp;quot;&amp;gt;Hydra&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The OS resides on software RAID (spanning /dev/sda and /dev/sdb). An overview is obtained via&lt;br /&gt;
&amp;lt;pre&amp;gt;cat /proc/mdstat&amp;lt;/pre&amp;gt;&lt;br /&gt;
Detailed information via&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mdadm --detail /dev/mdX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where X = 1, 5, 6, 7.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;/home&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;/archive&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Minotaur&amp;lt;br&amp;gt;Web interface. Log in to the head node as root and use &#039;&#039;opera&#039;&#039;.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Ariadne&amp;lt;br&amp;gt;RAID-5 controller with 4 drives.  Status can be checked by interrogating the controller:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aALL | less&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The &#039;Device Present&#039; section reports whether any drives are critical or have failed, and what state the RAID is in. More detailed information can be obtained via&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/opt/MegaRAID/MegaCli/MegaCli64 -LDPDInfo -aAll | less&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Right at the beginning (under &#039;Adapter #0&#039;), it should report &#039;State: Optimal&#039;.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Desktop machines, except pelops&amp;lt;br&amp;gt;Hardware RAID-1. The RAID status is reported upon reboot of a machine. Press Ctrl-C (when prompted) to enter the configuration utility. From within Linux, use (as root):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mpt-status -i 0&lt;br /&gt;
mpt-status -i 2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The second command only applies to machines with a second set of hard drives (achilles, agamemnon, nestor, poseidon).&amp;lt;br&amp;gt;&lt;br /&gt;
To allow regular users to verify the RAID status, the &amp;lt;tt&amp;gt;mpt-status&amp;lt;/tt&amp;gt; command has been added to the &amp;lt;tt&amp;gt;sudo&amp;lt;/tt&amp;gt; configuration:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sudo mpt-status -i 0&lt;br /&gt;
sudo mpt-status -i 2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Pelops: Software RAID (for OS and scratch partitions). See [[#hydra|Hydra]].&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Printers ===&lt;br /&gt;
&lt;br /&gt;
=== Scanner ===&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=General_Usage&amp;diff=289</id>
		<title>General Usage</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=General_Usage&amp;diff=289"/>
		<updated>2014-06-09T22:15:57Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Login&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh [netid]@quest.it.northwestern.edu    &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;tt&amp;gt;[netid]&amp;lt;/tt&amp;gt; is your NetID. The first time you log in, you will be asked to enter a file in which to save the key and then to enter a passphrase twice. Just press &amp;quot;Enter&amp;quot; at all three prompts and you should be able to log in successfully.&lt;br /&gt;
&lt;br /&gt;
Our group folder is located in /projects/b1011/luijten-group&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Example of job.mbs file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# ### AUTOMATICALLY GENERATED BATCH FILE&lt;br /&gt;
#MOAB -q [queue_name]&lt;br /&gt;
#MOAB -l advres=grail&lt;br /&gt;
#MOAB -A b1011&lt;br /&gt;
#MOAB -l walltime=[dd:hh:mm:ss]&lt;br /&gt;
&lt;br /&gt;
# ###name of job&lt;br /&gt;
#MOAB -N [name_of_job]&lt;br /&gt;
&lt;br /&gt;
# ### mail for begin/end/abort&lt;br /&gt;
#MOAB -m ea&lt;br /&gt;
#MOAB -M [email_address]                                                                                                      &lt;br /&gt;
&lt;br /&gt;
# ### number of nodes and processors per node&lt;br /&gt;
#MOAB -l nodes=2:ppn=6&lt;br /&gt;
&lt;br /&gt;
# ### indicates that job should not rerun if it fails&lt;br /&gt;
#MOAB -r n&lt;br /&gt;
&lt;br /&gt;
# ### stdout and stderr merged as stderr&lt;br /&gt;
#MOAB -j eo&lt;br /&gt;
&lt;br /&gt;
# ### write stderr to file&lt;br /&gt;
#MOAB -e log.err&lt;br /&gt;
&lt;br /&gt;
# ### the shell that interprets the job script&lt;br /&gt;
#MOAB -S /bin/bash&lt;br /&gt;
&lt;br /&gt;
cd /projects/b1011/luijten-group/[job_location]&lt;br /&gt;
time mpirun -np 12  ~/lammps-30Aug12_standard/src/lmp2013_mpi -in input.dat&lt;br /&gt;
&lt;br /&gt;
if [ $? -eq 0 ] ; then&lt;br /&gt;
touch COMPLETED&lt;br /&gt;
fi   &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;[queue_name]&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are two options for the queue name: collab and collab-preempt. Both have a startup priority of 5000. Collab is limited to a maximum of 262 cores and a maximum walltime of 7 days. Collab-preempt has no resource restrictions, but note that queues ending in &#039;-preempt&#039; contain jobs that can be interrupted and re-queued by jobs from a higher-priority queue.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[dd:hh:mm:ss]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the maximum allowed running time for your job. dd: days; hh: hours; mm: minutes; ss: seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[email_address]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the email address to which the system sends a notification when the job begins, aborts, or ends.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[job_location]&amp;lt;/tt&amp;gt;&lt;br /&gt;
This is the path of the folder in which your input file is located.&lt;br /&gt;
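As a sanity check on the &amp;lt;tt&amp;gt;[dd:hh:mm:ss]&amp;lt;/tt&amp;gt; walltime format, here is a small hypothetical helper (ours, not a Moab command) that converts such a string to total seconds:&lt;br /&gt;

```shell
# Hypothetical helper (not part of Moab): convert a dd:hh:mm:ss walltime
# string into total seconds, e.g. to compare against a queue limit.
walltime_seconds() {
  echo "$1" | awk -F: '{ print ((($1 * 24 + $2) * 60 + $3) * 60 + $4) }'
}

walltime_seconds 01:12:00:00   # prints 129600 (1 day 12 hours)
```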
&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Submit jobs&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
msub job.mbs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
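The &amp;lt;tt&amp;gt;touch COMPLETED&amp;lt;/tt&amp;gt; idiom at the end of the example script can be factored into a small wrapper; a sketch (the function name is ours):&lt;br /&gt;

```shell
# Sketch of the completion-marker idiom from job.mbs: run a command and
# create a COMPLETED file in the current directory only if it succeeds.
run_and_mark() {
  if "$@"; then
    touch COMPLETED
  fi
}

run_and_mark true    # creates COMPLETED in the current directory
```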
&amp;lt;/ul&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=General_Usage&amp;diff=288</id>
		<title>General Usage</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=General_Usage&amp;diff=288"/>
		<updated>2014-06-09T21:55:35Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Login&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh [netid]@quest.it.northwestern.edu    &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;tt&amp;gt;[netid]&amp;lt;/tt&amp;gt; is your NetID. The first time you log in, you will be asked to enter a file in which to save the key and then to enter a passphrase twice. Just press &amp;quot;Enter&amp;quot; at all three prompts and you should be able to log in successfully.&lt;br /&gt;
&lt;br /&gt;
Our group folder is located in /projects/b1011/luijten-group&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Example of job.mbs file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# ### AUTOMATICALLY GENERATED BATCH FILE&lt;br /&gt;
#MOAB -q collab&lt;br /&gt;
#MOAB -l advres=grail&lt;br /&gt;
#MOAB -A b1011&lt;br /&gt;
#MOAB -l walltime=[dd:hh:mm:ss]&lt;br /&gt;
&lt;br /&gt;
# ###name of job&lt;br /&gt;
#MOAB -N [name_of_job]&lt;br /&gt;
&lt;br /&gt;
# ### mail for begin/end/abort&lt;br /&gt;
#MOAB -m ea&lt;br /&gt;
#MOAB -M [email_address]                                                                                                      &lt;br /&gt;
&lt;br /&gt;
# ### number of nodes and processors per node&lt;br /&gt;
#MOAB -l nodes=2:ppn=6&lt;br /&gt;
&lt;br /&gt;
# ### indicates that job should not rerun if it fails&lt;br /&gt;
#MOAB -r n&lt;br /&gt;
&lt;br /&gt;
# ### stdout and stderr merged as stderr&lt;br /&gt;
#MOAB -j eo&lt;br /&gt;
&lt;br /&gt;
# ### write stderr to file&lt;br /&gt;
#MOAB -e log.err&lt;br /&gt;
&lt;br /&gt;
# ### the shell that interprets the job script&lt;br /&gt;
#MOAB -S /bin/bash&lt;br /&gt;
&lt;br /&gt;
cd /projects/b1011/luijten-group/[job_location]&lt;br /&gt;
time mpirun -np 12  ~/lammps-30Aug12_standard/src/lmp2013_mpi -in input.dat&lt;br /&gt;
&lt;br /&gt;
if [ $? -eq 0 ] ; then&lt;br /&gt;
touch COMPLETED&lt;br /&gt;
fi   &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=General_Usage&amp;diff=287</id>
		<title>General Usage</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=General_Usage&amp;diff=287"/>
		<updated>2014-06-09T21:36:38Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Login&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh [netid]@quest.it.northwestern.edu    &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;tt&amp;gt;[netid]&amp;lt;/tt&amp;gt; is your NetID. The first time you log in, you will be asked to enter a file in which to save the key and then to enter a passphrase twice. Just press &amp;quot;Enter&amp;quot; at all three prompts and you should be able to log in successfully.&lt;br /&gt;
&lt;br /&gt;
Our group folder is located in /projects/b1011/luijten-group&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=General_Usage&amp;diff=286</id>
		<title>General Usage</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=General_Usage&amp;diff=286"/>
		<updated>2014-06-09T21:33:40Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: Created page with &amp;quot;&amp;lt;/li&amp;gt; &amp;lt;li&amp;gt;Login &amp;lt;pre&amp;gt; ssh [netid]@quest.it.northwestern.edu     &amp;lt;/pre&amp;gt; where &amp;lt;tt&amp;gt;[netid]&amp;lt;/tt&amp;gt; is your NETID. The first time you login, it will ask you to enter file in which t...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Login&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh [netid]@quest.it.northwestern.edu    &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;tt&amp;gt;[netid]&amp;lt;/tt&amp;gt; is your NetID. The first time you log in, you will be asked to enter a file in which to save the key and then to enter a passphrase twice. Just press &amp;quot;Enter&amp;quot; at all three prompts and you should be able to log in successfully.&lt;br /&gt;
&lt;br /&gt;
Our group folder is located in /projects/b1011/luijten-group&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
	<entry>
		<id>https://csml-wiki.northwestern.edu/index.php?title=Hardware&amp;diff=285</id>
		<title>Hardware</title>
		<link rel="alternate" type="text/html" href="https://csml-wiki.northwestern.edu/index.php?title=Hardware&amp;diff=285"/>
		<updated>2014-06-09T21:22:34Z</updated>

		<summary type="html">&lt;p&gt;Wzhwei: /* Quest */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Desktop machines ===&lt;br /&gt;
&lt;br /&gt;
All desktop machines run [http://www.opensuse.org OpenSuSE]. [[Installation instructions for OpenSuSE 13.1]].&lt;br /&gt;
&lt;br /&gt;
=== Clusters ===&lt;br /&gt;
&lt;br /&gt;
==== Minotaur ====&lt;br /&gt;
* 38 nodes, each containing two 4-core processors (304 cores total). 8 GB memory per node.&amp;lt;br&amp;gt;Processor type: Intel Xeon E5472, 3.0 GHz.&lt;br /&gt;
* Jobs are scheduled via [http://www.adaptivecomputing.com/products/open-source/torque/ Torque]/Maui.  [[Notes on Torque]].&lt;br /&gt;
&lt;br /&gt;
==== Hydra ====&lt;br /&gt;
* 60 nodes, each containing two 6-core processors (720 cores total). 12 GB memory per node.&amp;lt;br&amp;gt;8 nodes (queue &amp;quot;fast&amp;quot;, nodes h001-h008) have Intel Xeon X5690 3.47 GHz processors.&amp;lt;br&amp;gt;52 nodes (queue &amp;quot;default&amp;quot;, nodes h009-h060) have Intel Xeon E5645 2.40 GHz processors.&lt;br /&gt;
* Jobs are scheduled via [http://www.adaptivecomputing.com/products/open-source/torque/ Torque]/Maui.  [[Notes on Torque]].&lt;br /&gt;
&lt;br /&gt;
==== Quest ====&lt;br /&gt;
* Jobs are scheduled via [http://www.adaptivecomputing.com/products/open-source/torque/ Torque]/Maui.  [[Notes on Torque]].&lt;br /&gt;
* [[General Usage]]&lt;br /&gt;
&lt;br /&gt;
=== Disk space, backups, and RAID storage ===&lt;br /&gt;
&lt;br /&gt;
==== Disk space allocations and nightly backups ====&lt;br /&gt;
&lt;br /&gt;
Each user has a home directory located on &#039;&#039;ariadne&#039;&#039;.  This home directory is exported to all desktop machines, so that you see the same home filesystem on each machine.  The drive is protected against hardware failure via a [http://en.wikipedia.org/wiki/RAID_1#RAID_1 RAID-1] setup.  Furthermore, each night all new or modified files on /home are written to tape (located in ariadne).  This makes it important not to store temporary data in your home folder, as it would quickly fill up the tape.  Since users tend to forget this, a quota system has been enabled on ariadne, restricting each user to 15 GB. To check how much space you are using, log on to ariadne and issue the command&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
quota -s&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In addition, each user has significant additional storage on the scratch partitions. These drives are located in the different desktop machines and protected via RAID-1, but backups are your own responsibility. Note that these partitions are generally only mounted on the desktop machine that contains the corresponding drives. If you need a partition to be exported to a different machine, please ask.&lt;br /&gt;
&lt;br /&gt;
==== Changing the nightly backup tape ====&lt;br /&gt;
&lt;br /&gt;
# Press eject button on tape drive in ariadne.&lt;br /&gt;
# Take the tape cartridge out of the drive and put it in its box (should be on top of ariadne). Label the box.&lt;br /&gt;
# Insert cleaning tape (on top of ariadne).  It will work for less than a minute and then eject automatically.&lt;br /&gt;
# Put cleaning tape back in box on top of ariadne.&lt;br /&gt;
# Insert new DDS tape.  Leave empty box on top of ariadne.&lt;br /&gt;
# Update settings in /usr/local/lib/backup, namely &#039;&#039;position&#039;&#039; and &#039;&#039;tapenumber&#039;&#039;; update logfile.&lt;br /&gt;
&lt;br /&gt;
==== Recovering data from the nightly backup tape ====&lt;br /&gt;
&lt;br /&gt;
Log files of all nightly backup tapes are located on ariadne, in /usr/local/lib/backup. For privacy reasons, these logfiles are only accessible to root. Once the proper file to be recovered has been identified, insert the corresponding tape into the drive on ariadne and follow these steps (all to be executed as root):&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;tt&amp;gt;cd /&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;(if you change to a different directory, the recovered file will be placed relative to this directory)&lt;br /&gt;
# &amp;lt;tt&amp;gt;/usr/local/bin/tape-rewind&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;(or &amp;lt;tt&amp;gt;mtst -f /dev/nst0 rewind&amp;lt;/tt&amp;gt;)&lt;br /&gt;
# &amp;lt;tt&amp;gt;mtst -f /dev/nst0 fsf &amp;lt;position&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;(see the contents file in /usr/local/lib/backup for the position number)&lt;br /&gt;
# &amp;lt;tt&amp;gt;tar xzvf /dev/nst0 &amp;lt;full_file_name_without_leading_slash&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;This step won&#039;t work unless you omit the leading slash; also note that you can specify multiple files, separated by spaces. The &#039;z&#039; option is necessary because all nightly backups are compressed. For wildcards, use --wildcards and escape &#039;*&#039; and &#039;?&#039;. For example: &amp;lt;tt&amp;gt;tar -x --wildcards -zvf /dev/nst0 \*datafiles\*&amp;lt;/tt&amp;gt;&lt;br /&gt;
# &amp;lt;tt&amp;gt;/usr/local/bin/tape-rewoffl&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;(or &amp;lt;tt&amp;gt;mtst -f /dev/nst0 rewoffl&amp;lt;/tt&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
==== Archiving data using the LTO tape drive ====&lt;br /&gt;
&lt;br /&gt;
==== Checking RAID status ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;span id=&amp;quot;hydra&amp;quot;&amp;gt;Hydra&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The OS resides on software RAID (spanning /dev/sda and /dev/sdb). An overview is obtained via&lt;br /&gt;
&amp;lt;pre&amp;gt;cat /proc/mdstat&amp;lt;/pre&amp;gt;&lt;br /&gt;
Detailed information via&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mdadm --detail /dev/mdX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where X = 1, 5, 6, 7.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;/home&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;/archive&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Minotaur&amp;lt;br&amp;gt;Web interface. Log in to the head node as root and use &#039;&#039;opera&#039;&#039;.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Ariadne&amp;lt;br&amp;gt;RAID-5 controller with 4 drives.  Status can be checked by interrogating the controller:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aALL | less&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The &#039;Device Present&#039; section reports whether any drives are critical or have failed, and what state the RAID is in. More detailed information can be obtained via&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/opt/MegaRAID/MegaCli/MegaCli64 -LDPDInfo -aAll | less&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Right at the beginning (under &#039;Adapter #0&#039;), it should report &#039;State: Optimal&#039;.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Desktop machines, except pelops&amp;lt;br&amp;gt;Hardware RAID-1. The RAID status is reported upon reboot of a machine. Press Ctrl-C (when prompted) to enter the configuration utility. From within Linux, use (as root):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mpt-status -i 0&lt;br /&gt;
mpt-status -i 2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The second command only applies to machines with a second set of hard drives (achilles, agamemnon, nestor, poseidon).&amp;lt;br&amp;gt;&lt;br /&gt;
To allow regular users to verify the RAID status, the &amp;lt;tt&amp;gt;mpt-status&amp;lt;/tt&amp;gt; command has been added to the &amp;lt;tt&amp;gt;sudo&amp;lt;/tt&amp;gt; configuration:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sudo mpt-status -i 0&lt;br /&gt;
sudo mpt-status -i 2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Pelops: Software RAID (for OS and scratch partitions). See [[#hydra|Hydra]].&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Printers ===&lt;br /&gt;
&lt;br /&gt;
=== Scanner ===&lt;/div&gt;</summary>
		<author><name>Wzhwei</name></author>
	</entry>
</feed>