How is Ubuntu different from Linux on a Mac? Sorry if this is a basic question... I'm new to this. I'm trying to run a bash script that creates .sh scripts from some FASTQ files. It works in the Terminal on macOS. When I try to run it on my Windows laptop (running Ubuntu), it ignores my escaping of the #s and just says several commands can't be found. I tried dos2unix and double-checked with cat -A file.sh, but that didn't help.
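(For reference, this is the kind of line-ending check I mean; just a minimal sketch, using the Slurm_Generator.sh name from the error messages below:)

# Show non-printing characters: Unix (LF) lines end in "$",
# Windows (CRLF) lines end in "^M$".
cat -A Slurm_Generator.sh | head

# If "^M$" shows up, strip the carriage returns (this is what dos2unix does):
sed -i 's/\r$//' Slurm_Generator.sh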
The code I'm trying to run takes all the fastq files in a folder and uses their file names to create one .sh file per file for SLURM job submission (this is what my university's computing cluster requires, and I need to make 100+ job scripts). So, the macOS version is below:
for FILE in *fastq; #change file type when needed (e.g., fasta, fastq, fastq.gz)
do echo -e \
\#\!/bin/bash \
\\n\#SBATCH --partition=nonpre \# Partition \(job queue\) \
\\n\#SBATCH --requeue \# Return job to the queue if preempted \
\\n\#SBATCH --job-name=samples \# Assign a short name to your job \
\\n\#SBATCH --nodes=1 \# Number of nodes you require \
\\n\#SBATCH --ntasks=1 \# Total \# of tasks across all nodes \
\\n\#SBATCH --cpus-per-task=64 \# Cores per task \(\>1 if multithread tasks\) \
\\n\#SBATCH --mem=180000 \# Real memory \(RAM\) required \(MB\) \
\\n\#SBATCH --time=72:00:00 \# Total run time limit \(HH:MM:SS\) \
\\n\#SBATCH --output=slurm.%N.${FILE}.out \# STDOUT output file \
\\n\#SBATCH --error=slurm.%N.${FILE}.err \# STDERR output file \(optional\) \
\\n \
\\n\#ADD WHATEVER CODE YOU WANT HERE AS YOUR SLURM JOB SUBMISSION \
\\n \
\\nsacct --format=JobID,JobName,NTasks,NNodes,NCPUS,MaxRSS,AveRSS,AveCPU,Elapsed,ExitCode -j \$SLURM_JOBID \#this will get job run stats from SLURM\; use these to help designate memory of future submissions \
> ${FILE}.sh;
done
When I run this on Windows, I get:
Slurm_Generator.sh: line 13: #!/bin/bash: No such file or directory
Slurm_Generator.sh: line 14: \n#SBATCH: command not found
Slurm_Generator.sh: line 16: \n#SBATCH: command not found
Slurm_Generator.sh: line 18: \n#SBATCH: command not found
Slurm_Generator.sh: line 20: \n#SBATCH: command not found
Slurm_Generator.sh: line 22: \n#SBATCH: command not found
Slurm_Generator.sh: line 24: \n#SBATCH: command not found
Slurm_Generator.sh: line 26: \n#SBATCH: command not found
Slurm_Generator.sh: line 28: \n#SBATCH: command not found
Slurm_Generator.sh: line 30: \n#SBATCH: command not found
Slurm_Generator.sh: line 32: \n#SBATCH: command not found
Slurm_Generator.sh: line 35: \n: command not found
Slurm_Generator.sh: line 36: \n#ADD: command not found
Slurm_Generator.sh: line 39: \n: command not found
Slurm_Generator.sh: line 40: \nsacct: command not found
Any help would be greatly appreciated, along with an explanation of the difference between Ubuntu on Windows and the Terminal on a Mac. I've tried researching this, but I keep finding suggested code without any explanation, or it turns out not to be my problem. Thanks!
Edit: I tried running chmod +x script.sh, but I still get the errors above. Am I running "echo" wrong? Even running "for FILE in *fastq; do echo -e hello; done" says Command 'hello' not found.
Edit: running "bash file.sh" produces the following (for each of the 5 .fastq files in my directory):
Slurm_Generator.sh: line 8: #!/bin/bash: No such file or directory
Slurm_Generator.sh: line 9: \n#SBATCH: command not found
Slurm_Generator.sh: line 11: \n#SBATCH: command not found
Slurm_Generator.sh: line 13: \n#SBATCH: command not found
Slurm_Generator.sh: line 15: \n#SBATCH: command not found
Slurm_Generator.sh: line 17: \n#SBATCH: command not found
Slurm_Generator.sh: line 19: \n#SBATCH: command not found
Slurm_Generator.sh: line 21: \n#SBATCH: command not found
Slurm_Generator.sh: line 23: \n#SBATCH: command not found
Slurm_Generator.sh: line 25: \n#SBATCH: command not found
Slurm_Generator.sh: line 27: \n#SBATCH: command not found
Slurm_Generator.sh: line 30: \n: command not found
Slurm_Generator.sh: line 31: \n#ADD: command not found
Slurm_Generator.sh: line 34: \n: command not found
Slurm_Generator.sh: line 35: \nsacct: command not found
If I run cat -A file.sh, I see a $ at the end of every line. Even when I strip those out, I get the same results as above. Running ls -al script.sh gives:
-rwxrwxrwx 1 cerberus cerberus 1209 Jun 16 00:17 Slurm_Generator.sh
Edit: I changed the script to:
#! /bin/bash
for FILE in *fastq; #change file type when needed (e.g., fasta, fastq, fastq.gz)
do echo -e \
"
#\!/bin/bash
#SBATCH --partition=nonpre # Partition (job queue)
#SBATCH --requeue # Return job to the queue if preempted
#SBATCH --job-name=samples # Assign a short name to your job
#SBATCH --nodes=1 # Number of nodes you require
#SBATCH --ntasks=1 # Total # of tasks across all nodes
#SBATCH --cpus-per-task=64 # Cores per task (>1 if multithread tasks)
#SBATCH --mem=180000 # Real memory (RAM) required (MB)
#SBATCH --time=72:00:00 # Total run time limit (HH:MM:SS)
#SBATCH --output=slurm.%N.${FILE}.out # STDOUT output file
#SBATCH --error=slurm.%N.${FILE}.err # STDERR output file (optional)
#ADD WHATEVER CODE YOU WANT HERE AS YOUR SLURM JOB SUBMISSION \
sacct --format=JobID,JobName,NTasks,NNodes,NCPUS,MaxRSS,AveRSS,AveCPU,Elapsed,ExitCode -j \$SLURM_JOBID \#this will get job run stats from SLURM\; use these to help designate memory of future submissions" \
> ${FILE}.sh;
done
My new output is below (much better):
#\!/bin/bash
#SBATCH --partition=nonpre # Partition (job queue)
#SBATCH --requeue # Return job to the queue if preempted
#SBATCH --job-name=samples # Assign a short name to your job
#SBATCH --nodes=1 # Number of nodes you require
#SBATCH --ntasks=1 # Total # of tasks across all nodes
#SBATCH --cpus-per-task=64 # Cores per task (>1 if multithread tasks)
#SBATCH --mem=180000 # Real memory (RAM) required (MB)
#SBATCH --time=72:00:00 # Total run time limit (HH:MM:SS)
#SBATCH --output=slurm.%N.Sample1.fastq.out # STDOUT output file
#SBATCH --error=slurm.%N.Sample1.fastq.err # STDERR output file (optional)
#ADD WHATEVER CODE YOU WANT HERE AS YOUR SLURM JOB SUBMISSION
sacct --format=JobID,JobName,NTasks,NNodes,NCPUS,MaxRSS,AveRSS,AveCPU,Elapsed,ExitCode -j $SLURM_JOBID \#this will get job run stats from SLURM\; use these to help designate memory of future submissions
The only problem I have now is that while it prints this out for each .fastq file (which is exactly what I want), the final .sh files it writes are blank. So it isn't recognizing the > ${FILE}.sh part of the script.
Thanks, everyone!
Answer 1
Use a heredoc instead:
for FILE in *fastq; #change file type when needed (e.g., fasta, fastq, fastq.gz)
do
cat <<-EOF > ${FILE}.sh
#!/bin/bash
#SBATCH --partition=nonpre # Partition (job queue)
#SBATCH --requeue # Return job to the queue if preempted
#SBATCH --job-name=samples # Assign a short name to your job
#SBATCH --nodes=1 # Number of nodes you require
#SBATCH --ntasks=1 # Total # of tasks across all nodes
#SBATCH --cpus-per-task=64 # Cores per task (>1 if multithread tasks)
#SBATCH --mem=180000 # Real memory (RAM) required (MB)
#SBATCH --time=72:00:00 # Total run time limit (HH:MM:SS)
#SBATCH --output=slurm.%N.${FILE}.out # STDOUT output file
#SBATCH --error=slurm.%N.${FILE}.err # STDERR output file (optional)
#ADD WHATEVER CODE YOU WANT HERE AS YOUR SLURM JOB SUBMISSION
sacct --format=JobID,JobName,NTasks,NNodes,NCPUS,MaxRSS,AveRSS,AveCPU,Elapsed,ExitCode -j \$SLURM_JOBID #this will get job run stats from SLURM; use these to help designate memory of future submissions
EOF
done
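As a usage note (just a sketch, assuming the generator above is saved as Slurm_Generator.sh and run from the folder containing the fastq files): the EOF delimiter is left unquoted so that ${FILE} expands while each script is being written, whereas \$SLURM_JOBID is escaped so it is written out literally and only expanded when the job itself runs. The generated scripts can then be submitted with sbatch, for example:

# Generate one job script per fastq file, then submit each one.
bash Slurm_Generator.sh
for JOB in *fastq.sh; do
    sbatch "$JOB"
done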
Answer 2
I realized I had to add a \ before the > ${FILE}.sh. So the answer is:
#! /bin/bash
for FILE in *fastq; #change file type when needed (e.g., fasta, fastq, fastq.gz)
do echo -e \
"
#\!/bin/bash
#SBATCH --partition=nonpre # Partition (job queue)
#SBATCH --requeue # Return job to the queue if preempted
#SBATCH --job-name=samples # Assign a short name to your job
#SBATCH --nodes=1 # Number of nodes you require
#SBATCH --ntasks=1 # Total # of tasks across all nodes
#SBATCH --cpus-per-task=64 # Cores per task (>1 if multithread tasks)
#SBATCH --mem=180000 # Real memory (RAM) required (MB)
#SBATCH --time=72:00:00 # Total run time limit (HH:MM:SS)
#SBATCH --output=slurm.%N.${FILE}.out # STDOUT output file
#SBATCH --error=slurm.%N.${FILE}.err # STDERR output file (optional)
#ADD WHATEVER CODE YOU WANT HERE AS YOUR SLURM JOB SUBMISSION \
sacct --format=JobID,JobName,NTasks,NNodes,NCPUS,MaxRSS,AveRSS,AveCPU,Elapsed,ExitCode -j \$SLURM_JOBID \#this will get job run stats from SLURM\; use these to help designate memory of future submissions" \
> ${FILE}.sh;
done
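For what it's worth, the trailing \ matters because of line continuation: without it, the newline ends the echo command, and > ${FILE}.sh on the next line becomes a separate command of its own, which just creates an empty file (hence the blank .sh files described above). A minimal illustration (out.sh is only a placeholder name):

# Continued line: the redirection belongs to echo, so out.sh gets the text.
echo -e "hello" \
> out.sh

# No continuation: echo prints to the terminal, and the bare "> out.sh"
# on its own line truncates out.sh to an empty file.
echo -e "hello"
> out.sh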