42 changes: 29 additions & 13 deletions docs/Software/Available_Applications/ANSYS.md
@@ -91,7 +91,7 @@ Below is an example of this from a fluent script.
module load ANSYS/{{app.default}}

JOURNAL_FILE=fluent_${SLURM_JOB_ID}.in
cat ${JOURNAL_FILE}
cat << EOF > ${JOURNAL_FILE}
/file/read-case-data testCase${SLURM_ARRAY_TASK_ID}.cas
/solve/dual-time-iterate 10
/file/write-case-data testOut${SLURM_ARRAY_TASK_ID}.cas
@@ -258,7 +258,7 @@ n24-31 wbn056 8/72 Linux-64 71521-71528 Intel(R) Xeon(R) E5-2695 v4
### Checkpointing

!!! warning "Checkpointing"
We strongly the use of [checkpointing](../../Batch_Computing/Job_Checkpointing.md) for any job running for more than a day.
We recommend [checkpointing](../../Batch_Computing/Job_Checkpointing.md) for any job running for more than a day.

It is best practice when running long jobs to enable autosaves.

@@ -268,8 +268,6 @@ It is best practice when running long jobs to enable autosaves.

Where `500` is the number of iterations to run before creating a save.
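
The autosave setting itself goes in the journal file. A minimal sketch using Fluent's TUI autosave command (the exact TUI path can vary between Fluent versions, so confirm against your version's TUI tree; `500` matches the example above):

```
/file/auto-save/data-frequency 500
```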

In order to save disk space you may also want to include the line

### Interrupting

Including the following code at the top of your journal file will allow
@@ -405,7 +403,7 @@ solution specify as relative path, or unload compiled lib before saving

module load ANSYS/{{ applications.ANSYS.default }}
input="/share/test/ansys/mechanical/structural.dat"
cfx5solve -batch -def "${input} -part ${SLURM_NTASKS}
cfx5solve -batch -def "${input}" -part ${SLURM_NTASKS}
```

!!! tip
@@ -446,7 +444,7 @@ xvfb-run cfx5post input.cse
module load ANSYS/{{ applications.ANSYS.default }}

input=${ANSYS_ROOT}/ansys/data/verif/vm263.dat
mapdl -b -i "${input}
mapdl -b -i "${input}"
```

=== "Shared Memory"
@@ -517,23 +515,41 @@ xvfb-run cfx5post input.cse

## LS-DYNA

### Fluid-Structure Example
LS-DYNA specialises in highly non-linear, transient dynamic finite element analysis.

### Command line options

| Flag    | Purpose                                       | Example                      |
| ------- | --------------------------------------------- | ---------------------------- |
| i=      | The input file argument                       | `i="MyInput.k"`              |
| NCPUS   | Number of SMP cores                           | `NCPUS=$SLURM_CPUS_PER_TASK` |
| MEMORY  | How much memory to assign to the head node    | `MEMORY=2G`                  |
| MEMORY2 | How much memory to assign to subsequent nodes | `MEMORY2=2G`                 |

Input files are typically LS-DYNA keyword decks such as `.k` files.
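
Putting the flags from the table together, an invocation might look like the following sketch (`MyInput.k` is a hypothetical keyword deck; `MEMORY2` only applies when running across more than one node):

``` sl
# "MyInput.k" is a hypothetical input deck; flags as per the table above.
lsdyna i="MyInput.k" NCPUS=$SLURM_CPUS_PER_TASK MEMORY=2G MEMORY2=2G
```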

### Shared Memory Example

``` sl
#!/bin/bash -e

#SBATCH --job-name LS-DYNA
#SBATCH --account nesi99991 # Project Account
#SBATCH --time 01:00:00 # Walltime
#SBATCH --nodes 1 # (OPTIONAL) Limit to n nodes
#SBATCH --ntasks 16 # Number of CPUs to use
#SBATCH --mem-per-cpu 512MB # Memory per cpu
#SBATCH --cpus-per-task 16 # Number of CPUs to use

#SBATCH --mem-per-cpu 1G # Memory per cpu

module load ANSYS/{{ applications.ANSYS.default }}
input=3cars_shell2_150ms.k
lsdyna -dis -np $SLURM_NTASKS i="$input" memory=$(($SLURM_MEM_PER_CPU/8))M
lsdyna i=myinput.k NCPUS=$SLURM_CPUS_PER_TASK MEMORY2=1G
```

!!! tip
- Keep large transient LS-DYNA output in high-capacity storage such as `nobackup`, not your home directory.
- Use restart/[checkpointing](../../Batch_Computing/Job_Checkpointing.md) workflows for long runs so work can continue across multiple scheduled jobs.
- Avoid writing frequent output unless needed, as excessive I/O can reduce performance at scale.
- Adding a `-` in front of your requested number of CPUs, e.g. `ncpu=-64` will force tasks to execute in a deterministic way, decreasing performance but ensuring repeatability.
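
As a sketch of the restart workflow mentioned above (the dump file name `d3dump01` is an assumption; check which `d3dump` files your previous run actually wrote):

``` sl
# Resume from a dump file written by an earlier run of the same job.
lsdyna r=d3dump01 NCPUS=$SLURM_CPUS_PER_TASK
```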

## FENSAP-ICE

FENSAP-ICE is a fully integrated ice-accretion and aerodynamics
Expand Down Expand Up @@ -613,7 +629,7 @@ Progress can be tracked through the GUI as usual.

## ANSYS-Electromagnetic

ANSYS-EM jobs can be submitted through a slurm script or by
[interactive session](../../Interactive_Computing/Slurm_Interactive_Sessions.md).

### RSM