BedpostX is the second step in the DTI processing pipeline and sets the stage for probabilistic tractography. It can be run for a single subject with BedpostX.sh, or for multiple subjects with BedpostX.py, which uses BedpostX_TEMPLATE.sh as… a template!

BedpostX.sh
Intended for a single run (for multiple subjects, use BedpostX.py and BedpostX_TEMPLATE.sh, see below).
#!/bin/sh
#-------------BedpostX---------------------------------------
# This script is intended to perform BedpostX on DTI data, after
# DTI.py has been run. The script first sets up the required input
# files, and then moves the results to Analysis/DTI/BedpostX
# There must be a data_corrected.nii file under Subject/DTI_FOLDER/DTI, and a .bvec
# and .bval file under Subject/DTI_FOLDER
# ---------WHEN DO I RUN THIS?------------
# In the pipeline, you should first run the data through QA to eliminate any bad apples,
# then run DTI.py (eddy correction, brain-mask creation, and DTIFIT) before this script
#
# --------WHAT DO I NEED TO CHANGE?------------
# the email address that notifications about the run are sent to (the #$ -M line below)
# check if you want a different OUTDIR path
#------------SUBMISSION ON COMMAND LINE---------------
# [abc1@head ~]$ qsub -v EXPERIMENT=Dummy.01 BedpostX.sh 10 0405174398
#                    (positional arguments:            DTI_FOLDER  SUBJECT)
# --- BEGIN GLOBAL DIRECTIVE --
#$ -S /bin/sh
#$ -o $HOME/$JOB_NAME.$JOB_ID.out
#$ -e $HOME/$JOB_NAME.$JOB_ID.out
#$ -m ea
# -- END GLOBAL DIRECTIVE --
# -- BEGIN PRE-USER --
#Name of experiment whose data you want to access
EXPERIMENT=${EXPERIMENT:?"Experiment not provided"}
source /etc/biac_sge.sh
EXPERIMENT=`biacmount $EXPERIMENT`
EXPERIMENT=${EXPERIMENT:?"Returned NULL Experiment"}
if [ "$EXPERIMENT" = "ERROR" ]
then
exit 32
else
#Timestamp
echo "----JOB [$JOB_NAME.$JOB_ID] START [`date`] on HOST [$HOSTNAME]----"
# -- END PRE-USER --
# **********************************************************
# -- BEGIN USER DIRECTIVE --
# Send notifications to the following address
#$ -M user@email.com
# -- END USER DIRECTIVE --
# -- BEGIN USER SCRIPT --
# BedpostX requires the following input files, located in the same input directory:
#
# nodif_brain_mask.nii ----- The brain mask created from the corrected data
# data.nii.gz ----- A 4D series of data volumes. This will include diffusion-weighted volumes and volume(s) with no diffusion weighting.
# bvals ----- A text file listing the b-value (amount of diffusion weighting)
# applied to each volume - created in dicom-->nifti conversion
# bvecs ----- A text file containing the gradient direction applied during each
# diffusion-weighted volume (also created during dicom-->nifti conversion)
# Here are our variables from command line:
DTI_FOLDER=$1 # This is the name of the DTI directory under Data/$SUBJECT
SUBJECT=$2
# Set locations of files needed for BedpostX
DTI_DIR=$EXPERIMENT/Data/$SUBJECT/$DTI_FOLDER
BVEC=$DTI_DIR/.bvec
BVAL=$DTI_DIR/.bval
DATA=$DTI_DIR/DTI/data_corrected*
MASK=$DTI_DIR/BET/nodif_brain_mask.*
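# Optional sanity check (a sketch, not in the original script): fail fast if any
# required input is missing, rather than letting bedpostx error out later.
# Unmatched globs stay literal here and so fail the -e test.
for f in $BVEC $BVAL $DATA $MASK; do
    if [ ! -e "$f" ]; then
        echo "Missing required BedpostX input: $f"
        exit 32
    fi
done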
# First we must create the input directory
cd $DTI_DIR # Go to the DTI directory
mkdir -p $SUBJECT # Make the BedpostX input directory
# Now we copy the files we need into that directory
cp $BVEC -t $DTI_DIR/$SUBJECT
cp $BVAL -t $DTI_DIR/$SUBJECT
cp $MASK -t $DTI_DIR/$SUBJECT
cd $DTI_DIR/DTI
cp $DATA -t $DTI_DIR/$SUBJECT
# Now we rename these files to the correct names
cd $DTI_DIR/$SUBJECT
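# (the data_corrected* glob also brings along data_corrected.ecclog, the eddy
# correction log, which bedpostx does not need - hence the rm below)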
rm data_corrected.ecclog
mv .bvec bvecs
mv .bval bvals
DATA=$DTI_DIR/$SUBJECT/data_corrected.*
mv $DATA data.nii.gz
cd $DTI_DIR
# check that the input directory has all the correct files
bedpostx_datacheck $SUBJECT
# run bedpostX
bedpostx $SUBJECT -n "2" -w "1" -b "1000"
# -n is the number of fibres modelled per voxel, -w is the ARD weight, and -b is the burnin, i.e. the number of iterations before sampling starts
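# bedpostx writes its results to a new $SUBJECT.bedpostX directory next to the
# input directory, containing the fitted distributions that probtrackx needs
# (merged_th*samples, merged_ph*samples, mean_f*samples, etc.)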
# move BedpostX directory to proper location
mv $SUBJECT.bedpostX $EXPERIMENT/Analysis/DTI/BedpostX
# -- END USER SCRIPT -- #
# **********************************************************
# -- BEGIN POST-USER --
echo "----JOB [$JOB_NAME.$JOB_ID] STOP [`date`]----"
OUTDIR=${OUTDIR:-$DTI_DIR/$SUBJECT} # note: $DTI_DIR already includes $EXPERIMENT/Data/$SUBJECT
mv $HOME/$JOB_NAME.$JOB_ID.out $OUTDIR/$JOB_NAME.$JOB_ID.out
RETURNCODE=${RETURNCODE:-0}
exit $RETURNCODE
fi
# -- END POST USER--
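A quick way to check a finished run (a sketch; Dummy.01 stands in for your experiment, and the path assumes the mv at the end of the script - note that if Analysis/DTI/BedpostX already existed, mv will have nested a Subject.bedpostX directory inside it instead):

[abc1@head ~]$ ls `biacmount Dummy.01`/Analysis/DTI/BedpostX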
BedpostX.py
Intended for running BedpostX on multiple subjects, using BedpostX_TEMPLATE.sh as a template. PLEASE NOTE that this python script is set up for an old cluster at Duke (Einstein) which no longer exists!
#!/usr/bin/env python
import sys,os,time,re,datetime,smtplib
#------BedpostX.py---------------------------
#
# This script is used for performing BedpostX on a series of subjects. BedpostX is
# performed after DTI.py is run, which includes Eddy Correction, mask creation,
# and DTIfit. This script works with BedpostX_TEMPLATE.sh
#
#########user section#########################
#user specific constants
username = "abc1" #your cluster login name (use what shows up in qstatall)
useremail = "user@email.com" #email to send job notices to
template_f = file("BedpostX_TEMPLATE.sh") #job template location (on head node)
experiment = "NAME.01" #experiment name for qsub
nodes = 24 #number of nodes on cluster
maintain_n_jobs = 25 #leave one in q to keep them moving through
min_jobs = 5 #minimum number of jobs to keep in q even when crowded
n_fake_jobs = 10 #during business hours, pretend there are extra jobs to try and leave a few spots open
sleep_time = 20 #pause time (sec) between job count checks
max_run_time = 720 #maximum time any job is allowed to run in minutes
max_run_hours=24 #maximum number of hours submission script can run
warning_time=18 #send out a warning after this many hours informing you that the daemon is still running
#make job files - these are the lists to be traversed
#all iterated items must be in "[ ]" separated by commas.
subnums = ["123","456","789"] #should be entered in quotes, separated by commas, to be used as strings
runs = [1] #single runs should still be in [ ], or use runs=range(1,2) (range cuts the last number off)
dtifolder = "11" #the folder containing the dti nifti, bvec, and bvals files
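#with the values above, subject "123" would be read from $EXPERIMENT/Data/123/11
#(see DTI_DIR in BedpostX_TEMPLATE.sh)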
####!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
###############################################
def daemonize(stdin='/dev/null',stdout='/dev/null',stderr='/dev/null'):
    try:
        #try first fork
        pid=os.fork()
        if pid>0:
            sys.exit(0)
    except OSError, e:
        sys.stderr.write("fork #1 failed: (%d) %s\n" % (e.errno,e.strerror))
        sys.exit(1)
    os.chdir("/")
    os.umask(0)
    os.setsid()
    try:
        #try second fork
        pid=os.fork()
        if pid>0:
            sys.exit(0)
    except OSError, e:
        sys.stderr.write("fork #2 failed: (%d) %s\n" % (e.errno, e.strerror))
        sys.exit(1)
    for f in sys.stdout, sys.stderr: f.flush()
    si=file(stdin,'r')
    so=file(stdout,'a+')
    se=file(stderr,'a+',0)
    os.dup2(si.fileno(),sys.stdin.fileno())
    os.dup2(so.fileno(),sys.stdout.fileno())
    os.dup2(se.fileno(),sys.stderr.fileno())
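#The two forks above are the standard Unix daemonization recipe: the first fork
#returns control to the shell, setsid() detaches from the controlling terminal,
#and the second fork keeps the daemon from ever reacquiring one; stdin, stdout,
#and stderr are then redirected to the files passed in (daemon.log, in the call below)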
start_dir = os.getcwd()
daemonize('/dev/null',os.path.join(start_dir,'daemon.log'),os.path.join(start_dir,'daemon.log'))
sys.stdout.close()
os.chdir(start_dir)
temp=time.localtime()
hour,minute,second=temp[3],temp[4],temp[5]
prev_hr=temp[3]
t0=str(hour)+':'+str(minute)+':'+str(second)
log_name=os.path.join(start_dir,'daemon.log')
log=file(log_name,'w')
log.write('Daemon started at %s with pid %d\n' % (t0,os.getpid()))
log.write('To kill this process type "kill %s" at the head node command line\n' % os.getpid())
log.close()
t0=time.time()
master_clock=0
#build allowed timedelta
kill_time_limit = datetime.timedelta(minutes=max_run_time)
def _check_jobs(username, kill_time_limit, n_fake_jobs):
    #careful, looks like all vars are global
    #see how many jobs we have in the queue
    #set number of jobs to maintain based on time of day.
    cur_time = datetime.datetime.now() #get current time
    if (cur_time.weekday() > 4) | (cur_time.hour < 8) | (cur_time.hour > 17):
        n_other_jobs = 0
    else: #its a weekday during business hours, fake some extra jobs to leave a few nodes open
        n_other_jobs = n_fake_jobs
    n_jobs = 0
    status = os.popen("status -a") #list jobs in the queue (cluster-specific command)
    status_list = status.readlines()
    for line in status_list:
        #are these active or q'd jobs?
        if (line.find("Running") > -1):
            running = 1
        elif (line.find("Waiting") > -1): #all following jobs are in queue, not running
            running = 0
        #if job is mine
        if (line.find(username) > 0) & (line.find("interact.q") < 0): #name is in the line, not including first spot
            n_jobs = n_jobs + 1
            if running == 1: #if active job, check how long its been running and delete it if too long
                job_info = line.split() #get job information
                start_date = job_info[4].split("/") #split job start date (month/day/year)
                start_time = job_info[5].split(":") #split time from hours:minutes:seconds format
                started = datetime.datetime(int(start_date[2]), int(start_date[0]), int(start_date[1]),
                                            int(start_time[0]), int(start_time[1]), int(start_time[2]))
                if ((cur_time - started) > kill_time_limit) & (line.find("stalled") == -1): #if the active job is over max run time, delete it
                    os.system("qdel %s" % (job_info[0])) #delete the runaway job
                    print("Job %s was deleted because it ran for more than the maximum time." % (job_info[0]))
        #if the line starts with a job number and isnt an interactive job, count it as someone elses
        elif bool(re.match("^\d+", line)) & (line.find("interact") < 0) & (line.find("(Error)") < 0):
            n_other_jobs = n_other_jobs + 1
    return n_jobs, n_other_jobs
#make a directory to write job files to and store the start directory
tmp_dir = str(os.getpid())
os.mkdir(tmp_dir)
#read in template
template = template_f.read()
template_f.close()
os.chdir(tmp_dir)
#for each subject
for subnum in subnums:
    #for each run
    for run in runs:
        #Check for changes in user settings
        user_settings = ("/home/%s/user_settings.txt") % (username)
        if os.path.isfile(user_settings):
            f = file(user_settings)
            settings = f.readlines()
            f.close()
            for line in settings:
                exec(line)
        #define substitutions, make them in template
        runstr = "%05d" % (run)
        tmp_job_file = template.replace( "SUB_USEREMAIL_SUB", useremail )
        tmp_job_file = tmp_job_file.replace( "SUB_SUBNUM_SUB", str(subnum) )
        tmp_job_file = tmp_job_file.replace( "SUB_DTIFOLDER_SUB", str(dtifolder) )
        #make fname and write job file to cwd
        tmp_job_fname = "_".join( ["BedpostX", subnum] )
        tmp_job_fname = ".".join( [tmp_job_fname, "job"] )
        tmp_job_f = file( tmp_job_fname, "w" )
        tmp_job_f.write(tmp_job_file)
        tmp_job_f.close()
        #wait to submit the job until we have fewer than maintain_n_jobs in the queue
        n_jobs = maintain_n_jobs
        while n_jobs >= maintain_n_jobs:
            #count jobs, delete jobs that are too old
            n_jobs, n_other_jobs = _check_jobs(username, kill_time_limit, n_fake_jobs)
            #adjust job submission by how many jobs are submitted
            #set to minimum number if all nodes are occupied
            #should still try to leave # open on weekdays
            if ((n_other_jobs + n_jobs) > (nodes+1)):
                n_jobs = maintain_n_jobs - (min_jobs - n_jobs) #makes the loop threshold behave like min_jobs when the cluster is crowded
            if n_jobs >= maintain_n_jobs:
                time.sleep(sleep_time)
            elif n_jobs < maintain_n_jobs:
                cmd = "qsub -v EXPERIMENT=%s %s" % ( experiment, tmp_job_fname )
                dummy, f = os.popen2(cmd)
        #Check how long the daemon has been running
        t1 = time.time()
        hour = (t1-t0)/3600
        log = file(log_name,'a+')
        log.write('Daemon has been running for %s hours\n' % hour)
        log.close()
        now_hr = time.localtime()[3]
        if now_hr > prev_hr:
            master_clock = master_clock + 1
        prev_hr = now_hr
        serverURL = "email.biac.duke.edu"
        if master_clock == warning_time:
            headers = "From: %s\r\nTo: %s\r\nSubject: Daemon job still running!\r\n\r\n" % (useremail,useremail)
            text = """Your daemon job has been running for %d hours. It will be killed after %d.
To kill it now, log onto the head node and type kill %d""" % (warning_time,max_run_hours,os.getpid())
            message = headers + text
            mailServer = smtplib.SMTP(serverURL)
            mailServer.sendmail(useremail,useremail,message)
            mailServer.quit()
        elif master_clock == max_run_hours:
            headers = "From: %s\r\nTo: %s\r\nSubject: Daemon job killed!\r\n\r\n" % (useremail,useremail)
            text = "Your daemon job has been killed. It has run for the maximum time allotted."
            message = headers + text
            mailServer = smtplib.SMTP(serverURL)
            mailServer.sendmail(useremail,useremail,message)
            mailServer.quit()
            ID = os.getpid()
            os.system('kill '+str(ID))
#wait for jobs to complete
#delete them if they run too long
n_jobs = 1
while n_jobs > 0:
    n_jobs, n_other_jobs = _check_jobs(username, kill_time_limit, n_fake_jobs)
    time.sleep(sleep_time)
#remove tmp job files, move to start dir, and delete tmpdir
#terminated jobs will prevent this from executing
#you will then have to clean up a "#####" directory with
# ".job" files written in it.
cmd = "rm *.job"
os.system(cmd)
os.chdir(start_dir)
os.rmdir(tmp_dir)
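To launch the daemon (a sketch; assumes BedpostX.py and BedpostX_TEMPLATE.sh sit together in a directory on the head node):

[abc1@head ~]$ python BedpostX.py
[abc1@head ~]$ tail -f daemon.log

The script daemonizes immediately, so the prompt returns right away; progress is appended to daemon.log in the start directory, and the log records the pid to kill if you need to stop it early.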
BedpostX_TEMPLATE.sh
#!/bin/sh
#-------------BedpostX_TEMPLATE---------------------------------------
# This script is intended to perform BedpostX on DTI data, after
# DTI.py has been run. The script first sets up the required input
# files, and then moves the results to Analysis/DTI/BedpostX
# There must be a data_corrected.nii file under Subject/DTI_FOLDER/DTI, and a .bvec
# and .bval file under Subject/DTI_FOLDER
# ---------WHEN DO I RUN THIS?------------
# In the pipeline, you should first run the data through QA to eliminate any bad apples,
# then run DTI.py (eddy correction, brain-mask creation, and DTIFIT) before this script
#
# --------WHAT DO I NEED TO CHANGE?------------
# all changes should be made in the python script!
# --- BEGIN GLOBAL DIRECTIVE --
#$ -S /bin/sh
#$ -o $HOME/$JOB_NAME.$JOB_ID.out
#$ -e $HOME/$JOB_NAME.$JOB_ID.out
#$ -m ea
# -- END GLOBAL DIRECTIVE --
# -- BEGIN PRE-USER --
#Name of experiment whose data you want to access
EXPERIMENT=${EXPERIMENT:?"Experiment not provided"}
source /etc/biac_sge.sh
EXPERIMENT=`biacmount $EXPERIMENT`
EXPERIMENT=${EXPERIMENT:?"Returned NULL Experiment"}
if [ "$EXPERIMENT" = "ERROR" ]
then
exit 32
else
#Timestamp
echo "----JOB [$JOB_NAME.$JOB_ID] START [`date`] on HOST [$HOSTNAME]----"
# -- END PRE-USER --
# **********************************************************
# -- BEGIN USER DIRECTIVE --
# Send notifications to the address substituted in by BedpostX.py
#$ -M SUB_USEREMAIL_SUB
# -- END USER DIRECTIVE --
# -- BEGIN USER SCRIPT --
# BedpostX requires the following input files, located in the same input directory:
#
# nodif_brain_mask.nii ----- The brain mask created from the corrected data
# data.nii.gz ----- A 4D series of data volumes. This will include diffusion-weighted volumes and volume(s) with no diffusion weighting.
# bvals ----- A text file listing the b-value (amount of diffusion weighting)
# applied to each volume - created in dicom-->nifti conversion
# bvecs ----- A text file containing the gradient direction applied during each
# diffusion-weighted volume (also created during dicom-->nifti conversion)
# Here are our variables, substituted in by BedpostX.py (the single-run
# BedpostX.sh takes these on the command line instead):
DTI_FOLDER=SUB_DTIFOLDER_SUB # This is the name of the DTI directory under Data/$SUBJECT
SUBJECT=SUB_SUBNUM_SUB
# Set locations of files needed for BedpostX
DTI_DIR=$EXPERIMENT/Data/$SUBJECT/$DTI_FOLDER
BVEC=$DTI_DIR/.bvec
BVAL=$DTI_DIR/.bval
DATA=$DTI_DIR/DTI/data_corrected*
MASK=$DTI_DIR/BET/nodif_brain_mask.*
# First we must create the input directory
cd $DTI_DIR # Go to the DTI directory
mkdir -p $SUBJECT # Make the BedpostX input directory
# Now we copy the files we need into that directory
cp $BVEC -t $DTI_DIR/$SUBJECT
cp $BVAL -t $DTI_DIR/$SUBJECT
cp $MASK -t $DTI_DIR/$SUBJECT
cd $DTI_DIR/DTI
cp $DATA -t $DTI_DIR/$SUBJECT
# Now we rename these files to the correct names
cd $DTI_DIR/$SUBJECT
rm data_corrected.ecclog
mv .bvec bvecs
mv .bval bvals
DATA=$DTI_DIR/$SUBJECT/data_corrected.*
mv $DATA data.nii.gz
cd $DTI_DIR
# check that the input directory has all the correct files
bedpostx_datacheck $SUBJECT
# run bedpostX
bedpostx $SUBJECT -n "2" -w "1" -b "1000"
# -n is the number of fibres modelled per voxel, -w is the ARD weight, and -b is the burnin, i.e. the number of iterations before sampling starts
# move BedpostX directory to proper location
mv $SUBJECT.bedpostX $EXPERIMENT/Analysis/DTI/BedpostX
# -- END USER SCRIPT -- #
# **********************************************************
# -- BEGIN POST-USER --
echo "----JOB [$JOB_NAME.$JOB_ID] STOP [`date`]----"
OUTDIR=${OUTDIR:-$DTI_DIR/$SUBJECT} # note: $DTI_DIR already includes $EXPERIMENT/Data/$SUBJECT
mv $HOME/$JOB_NAME.$JOB_ID.out $OUTDIR/$JOB_NAME.$JOB_ID.out
RETURNCODE=${RETURNCODE:-0}
exit $RETURNCODE
fi
# -- END POST USER--