Lab IT
Current Genome-miner Private Network Configuration Info
NOTE: Any new connection to the private server network (via the Netgear switch) needs a manually assigned unique IP address in the same subnet - 192.168.137.XXX (see the example commands after the list below).
- GenomeMiner Dell PowerEdge T630 (model 613NHB2)
- Ethernet Port 1: 14:18:77:72:0f:bc
- IP Address: 132.239.25.238 (Assigned by UCSD Hostmaster on August 30th 2016, set by Automatic DHCP)
- Gateway: 132.239.25.1
- Subnetmask: 255.255.255.0
- Nameserver: 132.239.0.252
- Ethernet Port 2: 14:18:77:72:0f:bd
- Connection name: Netgear Switch
- IP Address: 192.168.137.11
- Gateway: <leave blank otherwise we'll need to set the priority for two gateways>
- Subnetmask: 255.255.255.0
- Nameserver: <blank>
- Synology DS2411+ (ZhangLabNAS1)
- Location: Sequencing room attached to ethernet switch
- IP Address: 192.168.137.5
- MAC Address: See Synology box
- Gateway: 192.168.137.1
- Subnetmask: 255.255.255.0
- Synology DS2413+ (ZhangLabNAS2)- Currently this device is not on a private network
- Location: PFBH 038 Confocal Microscope Room
- IP Address: 132.239.25.1
- MAC Address: 00:11:32:19:59:EF & 00:11:32:19:59:F0
- Gateway: 132.239.25.1
- Subnetmask: 255.255.255.0
- Synology DS2415+ (ZhangLabNAS3)
- Location: Sequencing room attached to ethernet switch
- IP Address: 192.168.137.6, 192.168.137.7
- MAC Address: See Synology box
- Gateway for both: 192.168.137.1
- Subnetmask for both: 255.255.255.0
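For reference, here is roughly what manually adding a machine to the private 192.168.137.x subnet can look like on an Ubuntu host. This is a minimal sketch only: the interface name (eth1) and the address (192.168.137.12) are assumptions, not values from the list above, and a permanent assignment still needs to go into the machine's network configuration.

# Temporarily assign an unused address on the private subnet (address and interface are examples)
sudo ip addr add 192.168.137.12/24 dev eth1
sudo ip link set eth1 up
# Do not set a gateway on this interface (see the note above about two gateways)
# Check that the NAS units on the switch are reachable
ping -c 3 192.168.137.5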
Register an Ethernet connection to the campus network
- Sign-up form: http://netapps-web.ucsd.edu/cgi-bin/etherreg/etherform.pl
- Must include device MAC address
Web-Based Monitoring and Alarm Notification Web600
- http://132.239.25.73
- Mac Address: 00:07:F9:00:58:83
- Plugged into Port 4.1.34 B in Zhang main lab
- 132.239.25.73
- gateway 132.239.25.1
- netmask 255.255.255.0
- nameserver 132.239.0.252
- admin:Zhang lab's common password
TP-Link Archer C1900 router
- Admin link: http://tplinkwifi.net/
- user: ZhangLab
- pwd: genomclub
Time Capsule NAS
- Apple Time Capsule 877b54
- MAC address: 00:1F:00:1F:5B:87:7B:54
- Wireless password: Zhang lab's common password.
- Backup of lab wiki, also serves as a wireless router.
Lab website (WordPress)
- Create a MySQL database:
/opt/local/lib/mysql5/bin/mysql -u root -p
create database ZhangLab_WP_DB;
create user 'WP_user'@'localhost' identified by '7d7c65LzW4';
GRANT ALL PRIVILEGES ON ZhangLab_WP_DB.* TO 'WP_user'@'localhost';
FLUSH PRIVILEGES;
- Download wordpress-4.5.3.tar.gz, uncompress, and edit wp-config.php.
- Copy the wordpress folder to the web server directory (these steps are sketched below).
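A minimal sketch of the steps above. The download URL and the target directory are assumptions (the notes do not say which web root hosts the lab website; /Library/WebServer/Documents is the root mentioned under Wiki server below):

curl -O https://wordpress.org/wordpress-4.5.3.tar.gz
tar -xzf wordpress-4.5.3.tar.gz
cd wordpress
cp wp-config-sample.php wp-config.php
# Edit wp-config.php so that DB_NAME, DB_USER and DB_PASSWORD match the database created above
cd ..
sudo cp -R wordpress /Library/WebServer/Documents/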
Wiki server
- Mac Mini, Mac OS X
- Hostname: genome-tech.ucsd.edu
- IP address: 132.239.25.34
- Public site: http://genome-tech.ucsd.edu/public
- Private wiki: http://genome-tech.ucsd.edu/LabNotes , password protected.
- Server Root: /Library/WebServer/Documents
- How to add a new user:
cd /etc/httpd/
sudo htpasswd passwords new_user_name
Edit the httpd.conf file. Add the new_user_name to the following line:
Require user kunzhang ...some_existing_user_name
- Wiki backup
Launchd daemon: /Library/LaunchDeamons/labwiki.backup (attached to the wiki as labwiki.backup.txt)
Script for backup: /Library/Scripts/wiki_backup.sh (attached to the wiki as wiki_backup.sh.txt)
(A sketch of what such a backup script typically does is at the end of this section.)
- Server migration from Mac Mini to Mac Pro: Migration Log
- Reads from genome-miner are shared in /Volumes/genome-miner-SeqStore2/ (see http://genome-tech.ucsd.edu/LabNotes/index.php/Athurva_Gore/LabNotes/2009-12-23#Mounting_SeqStore2_from_Genome-Tech)
- MediaWiki was upgraded from v 1.13.2 to v 1.15.1
apachectl stop
#backup the mysql databases
mv LabNotes LabNotes.v1.13.2
tar -xzvf mediawiki-1.15.1.tar.gz
mv mediawiki-1.15.1 LabNotes
cp LabNotes.v1.13.2/LocalSettings.php LabNotes
cp -r LabNotes.v1.13.2/extensions LabNotes
cp /LabNotes/AdminSettings.sample /LabNotes/AdminSettings.php
#edit the /LabNotes/AdminSettings.php file for the admin privilege
cd LabNotes/maintenance
php update.php --aconf ../AdminSettings.php
cd ../..
mv LabNotes.v1.13.2/upload LabNotes
mv LabNotes.v1.13.2/zhangloupload LabNotes
apachectl start
- Sphinx 0.9.9 full text search engine (http://www.mediawiki.org/wiki/Extension:SphinxSearch) installed (Jan-23-2010). Updated on 10-Apr-2013 (/Users/kunzhang/www/LabNotes/extensions/SphinxSearch).
- MediaWiki was upgraded from v 1.15.1 to v 1.18.2 (April-08-2012)
mv LabNotes LabNotes.v1.13.2
tar -xzvf mediawiki-1.18.2.tar.gz
mv mediawiki-1.18.1 LabNotes.v1.18.2
cp LabNotes/LocalSettings.php LabNotes.v1.18.2
cp -r LabNotes.v1.13.2/extensions LabNotes
cd LabNotes.v1.18.2/maintenance
php update.php
sudo mv LabNotes/upload LabNotes.v1.18.2
sudo mv LabNotes/zhangloupload LabNotes.v1.18.2
sudo mv probedesign ../LabNotes.v1.18.2/
sudo mv PeakPicker.jar ../LabNotes.v1.18.2/
sudo mv SeqScannerInstall.exe ../LabNotes.v1.18.2/
sudo mv RobertsLabNotesSupplement ../LabNotes.v1.18.2/
mv LabNotes LabNotes.v1.15.1
mv LabNotes.v1.18.2 LabNotes
Download the latest version of the Sphinx extension: SphinxSearch-MW1.18-r92378.tar.gz
Extract the tarball and copy all the files to the LabNotes/extensions/SphinxSearch folder.
To rebuild the mysql database from a mysql dump:
/usr/local/mysql/bin/mysql -u wikiuser -pPASSWORD wikidb < wiki_db_current.sql
Update on 2/6/2013: I installed the latest version of mysql5 using MacPorts
sudo /opt/local/bin/port install mysql5-server
Note that the mysql files were installed at /opt/local/bin
To start the MySQL server on launch,
sudo launchctl load -w /Library/LaunchDaemons/org.macports.mysql5.plist
Update on 5/24/2017: I had to update Apache to v2.2.32 to deal with a security hole. However, that led to a cascade of issues that took many attempts to figure out. Eventually I upgraded PHP to v5.6 and upgraded the MySQL server to v5.5, but then changed it back to v5.2. Then MediaWiki went v1.18 -> v1.19.9 -> v1.20.2 -> v1.24.1 (this upgrade solved the problem of pages showing no content even though the wiki text was still there when editing; see https://www.mediawiki.org/wiki/Manual:Errors_and_symptoms).
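The nightly wiki backup mentioned above is driven by the labwiki.backup launchd job, which runs /Library/Scripts/wiki_backup.sh. The real script is in the attached file; the sketch below only illustrates the kind of steps such a script typically performs. The backup destination is an assumption (the Time Capsule is listed above as the wiki backup), and the database name/user are taken from the restore command above:

#!/bin/bash
# Hypothetical sketch only - not the actual /Library/Scripts/wiki_backup.sh
DATE=$(date +%Y%m%d)
BACKUP_DIR=/Volumes/TimeCapsule/labwiki   # assumed destination
mkdir -p $BACKUP_DIR
# Dump the wiki database (wikiuser/wikidb appear in the restore command above;
# adjust the mysqldump path to whichever MySQL install is active)
mysqldump -u wikiuser -pPASSWORD wikidb > $BACKUP_DIR/wiki_db_$DATE.sql
# Archive the wiki directory, including uploads (path appears elsewhere in these notes)
tar -czf $BACKUP_DIR/LabNotes_$DATE.tar.gz /Users/kunzhang/www/LabNotes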
Meangenemachine
- Custom built: Dual Intel Xeon X5645 (12 cores), 32 GB RAM
- Booting from 500GB drive
- Drive space: 600GB_store, 1TB_store1, 1TB_store2, 2TB_store1, 2TB_store2
- Ubuntu 11.04
- For general use
- address: meangenemachine.dynamic.ucsd.edu
Genome-analyzer (old genome-miner)
- Below is information from the old genome-miner wiki page, since it's the same computer
- MacPro (16 core, 23GB RAM, 3TB+6TB+6TB)
- MAC addresses:
- Port 1: 00:25:00:ee:6f:c8; connected to campus network
- IP=132.239.135.41
- Subnet=255.255.255.192
- Gateway=132.239.135.1
- DNS=127.0.1.1
- Installed Ubuntu 16.04 LTS on 8/30/2016
- Can be accessed at <genome-analyzer.ucsd.edu>
Genemapster
- Custom built: Dual Intel Xeon X5520 (8 cores), 32 GB RAM
- Ubuntu 10.04.3 LTS
- For general use and for running GPU software
- address: genemapster.dynamic.ucsd.edu
Genome-miner
Current Data Storage
- Note 1: RO - Read only; RW - Read and write
- Note 2: Our plan right now is to have people migrate to new volumes once every 3-4 years and to turn older volumes into long-term storage. As our drives are getting older, we need a better way to monitor them so that failed disks can be replaced as soon as possible in order to avoid volume crashes (a minimal SMART check is sketched after these notes). RAID 5 has 4+1 redundancy, so each 12-bay box can handle up to 2 disk failures.
- Note 3: Home_Raid1 and Scratch_SSD are software RAIDs.
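One common way to monitor aging drives is a periodic SMART health check; a minimal sketch, assuming smartmontools is installed and that /dev/sdb is one of the locally attached data drives (the actual device names on genome-miner are not recorded here, and drives inside the Synology boxes are not visible this way):

# Quick health verdict for one drive (device name is an example)
sudo smartctl -H /dev/sdb
# Run the same check across all locally attached drives
for d in /dev/sd?; do echo $d; sudo smartctl -H $d; done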
Directory Name | Total HDD size | Raid | Read speed | Write speed | Permissions | Descriptions |
---|---|---|---|---|---|---|
Home_Raid1 | 6TB | Raid 1 | TBD | TBD | RW | Home directories with mirroring |
Scratch_SSD | 4TB | Raid 0 | TBD | TBD | RW | Working directories, not good for long term storage |
12TB_ext | 12TB | Raid 0 | TBD | TBD | RW | Working directories, not good for long term storage |
NAS2_volume1 | 30TB | Raid 5 | TBD | TBD | RO | Confocal Store & long term storage |
NAS3_volume1 | 60TB | Raid 5 | TBD | TBD | RO | SeqStore2016 & FreshReads |
NAS3_volume2 | 60TB | Raid 5 | TBD | TBD | RW | Long term storage & working |
NAS1_volume1 | 30TB | Raid 5 | TBD | TBD | Not mounted | Long term storage & working |
NAS1_volume2 | 30TB | Raid 5 | TBD | TBD | Not mounted | Long term storage & working |
NAS2_volume2 | 60TB | Raid 5 | TBD | TBD | Not mounted | Unused Volume |
Summer 2016
- Summer 2016 - new Dell PowerEdge T630 server tower
- Chassis with up to 8, 3.5" Hard Drives, Software RAID, Tower Configuration
- Dual Intel® Xeon® E5-2697A v4 2.6GHz,40M Cache,9.60GT/s QPI,Turbo,HT,16C/32T (145W) Max Mem 2400MHz
- 2400MT/s RDIMMs
- 2x 32GB RDIMM, 2400MT/s, Dual Rank, x4 Data Width
- No RAID with Embedded SATA (1 SATA HDD or SATA SSD)
- Embedded SATA
- 1x 240GB Solid State Drive SATA Mix Use MLC 6Gbps 2.5in Hot-plug Drive,3.5in HYB CARR, SM863
- Single, Hot-plug Power Supply (1+0), 750W
- 3 Year Basic Hardware Warranty Repair, 5X10 HW-Only, 5x10 NBD On-site
- Installation notes here: File:Final configurations.sh
Dinh
- Re-installed Ubuntu in December 2014. See notes at http://genome-tech.ucsd.edu/LabNotes/index.php/Dinh:Genome_Miner_Upgrades.
Robert/Athurva Era
- Recently reinstalled using MacOS instead of Ubuntu; see the notes on the Genome Miner Reinstall page
- A MacPro (8 core, 16GB RAM, 4TB), dedicated for GA Pipeline (PFBH406)
- MAC addresses:
- Port 1: 00:25:00:ee:6f:c8; connected to the campus network;
- IP=132.239.135.41;
- Subnet: 255.255.0.0
- Gateway: 132.239.135.1
- DNS: 132.239.0.252, 128.54.16.2
- Port 2: 00:25:00:ee:3f:85; connected to GA PC;
- IP=192.168.137.11
- Installation of Ubuntu 9.04 (AMD64 Desktop).
- Installed rEFIt for dual booting (http://refit.sourceforge.net/).
- Installed Ubuntu 9.04 Desktop from ISO image.
- Turned out that all the hard drives were not recognized with the RAID card. Removed the RAID card.
- Press "c" to boot from the CD.
- The OS was installed on the hard drive in Bay 4.
- The other two hard drives in Bays 2 & 3 were also reformatted to ext3
- Ubuntu booted up correctly, but without network connection. Turned out that the Ethernet card was too new.
- Downloaded the driver from sf.net/projects/e1000, manually changed the following two files, compiled and installed the driver.
Added the following to netdev.c:
{PCI_VDEVICE(INTEL, 0x10F6), board_82574}
and to hw.h:
#define E1000_DEV_ID_82574L_NEW 0x10F6
- Installed GA_pipeline 1.4
- Install the Ubuntu Development Package:
apt-get install build-essential
- Installed fftw-3.2.1, need to compile with --enable-single;
- Installed gnuplot;
- Installed ImageMagick (for the convert function);
- Installed zlib, bzlib2;
- Installed the XML::Simple Perl module.
- Run make, all prerequisites passed.
- Installed UNAFold (http://dinamelt.bioinfo.rpi.edu/) in /usr/local/
Apache Web Server
- Brandon said we'd better have a web server on Genome-miner since the majority of the data are analyzed there.
- Make your own folder in /opt/lampp/htdocs/
- Put the files you want to share in your own folder (a short example is sketched after this list)
- Example: http://132.239.25.238/shg047/
- Usage: http://132.239.25.238/hello.pl
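A minimal sketch of the steps above; your_username and the file name are placeholders, and sudo may be needed depending on the permissions of the htdocs directory:

# Create a personal folder under the XAMPP document root and share a result file
mkdir /opt/lampp/htdocs/your_username
cp ~/results/coverage_summary.txt /opt/lampp/htdocs/your_username/
# The file is then reachable at http://132.239.25.238/your_username/coverage_summary.txt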
TSCC cluster
- UPDATE: Triton is expected to go down by the start of July 2013. Home files will be moved to the TSCC servers and /project will be remounted on the TSCC servers (warning: uid/gid might be changed on project drive!)
- INFO on data transfer and storage on TSCC: http://rci.ucsd.edu/computing/storage/data-transfer.html
- INFO on starting jobs on TSCC: http://rci.ucsd.edu/computing/jobs/index.html
- The Zhang lab currently has four "home" nodes, each with 16 cores and 64 GB of RAM. This is the best option when dealing with a large amount of data, such as sequencing reads from one or more HiSeq runs. To use these nodes, you will need your own account on the SDSC TSCC cluster (tscc-login.sdsc.edu). Once you log in to your account, you can create job files and submit them to the queue:
[#PBS -A k4zhang-group] - this line may not need to be in the job file to submit to home, condo, or glean.
Then submit the job to a specific queue with:
Submit to home queue: qsub -q home-k4zhang job_name
Submit to condo queue: qsub -q condo job_name -W group_list=condo-group
Submit to glean queue: qsub -q glean job_name -W group_list=condo-group
Submit to hotel queue (pay-per-use): qsub -q hotel job_name
Submit to pdafm queue (pay-per-use): qsub -q pdafm job_name
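For example, a minimal job script for our home queue could look like the following; the job name, resource request, walltime and the command it runs are illustrative assumptions, not a script that exists on the cluster:

#!/bin/bash
#PBS -q home-k4zhang
#PBS -N example_job
#PBS -l nodes=1:ppn=16
#PBS -l walltime=24:00:00
#PBS -A k4zhang-group
# Run from the directory the job was submitted from
cd $PBS_O_WORKDIR
# Hypothetical analysis command
./run_analysis.sh

Save it as, e.g., myjob.pbs and submit it with: qsub -q home-k4zhang myjob.pbs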
- Our home nodes IDs are: tscc-2-33, tscc-2-35, tscc-2-37, tscc-2-39
- The cheapest nodes for running jobs are: glean (runs free, no time limit), home-k4zhang (run on purchased time, no time limit), condo (run on purchased time, 8 hrs limit).
- The pay-as-you-go nodes (72 hours limit) for running jobs are: hotel (can specify GPU), and pdafm (max 512GB memory; request fewer processors/less memory to get charged less)
- Node recommendations:
High priority jobs should be run with home-k4zhang or condo.
When running home-k4zhang, we can only start a limited number of jobs. Also, run qstat to see that no one in our lab is using glean.
To start many high priority jobs which take less than 8 hours to complete, use condo.
Intermediate to low priority jobs can be run with glean; there is unlimited time but they may get kicked off at any time.
Run glean on our own nodes to lower the risk of getting kicked off; use any of the following lines to specify which of our nodes to run on:
#PBS -l nodes=tscc-2-33:ppn=X
#PBS -l nodes=tscc-2-35:ppn=X
#PBS -l nodes=tscc-2-37:ppn=X
#PBS -l nodes=tscc-2-39:ppn=X
(X = number of processors; if not specified, the entire node might be used and we can't start more than one job on each node.)
The only times we would get kicked off glean would be when someone in our lab starts a job with home-k4zhang or when condo is at full capacity (very unlikely!)
- File storage and access:
Do not transfer large files with scp; install and use bbftp and start a job to transfer files.
Large files for multiple uses can be transferred to /oasis (it has faster read/write than /project or /home).
Small or large files for one-time use can be transferred to the path provided by the TMPDIR environment variable (this will copy the file to a local scratch space) and then immediately processed.
Once the job finishes, any file in this local scratch space will be lost.
Always copy results back to our lab server to avoid losing them.
Avoid as much as possible reading and writing to /project or /home. Use /oasis instead.
- Checking the output of a job while the job is still running:
- Use qstat -u [your_username_here] to get the job id: should look something like "XXXXXXX.tscc-mgr.local" where the 7 digits before ".tscc-mgr.local" are your job ID.
- Use checkjob [your_job_id_here] to get info about your job. The node your job is running on should be under "Allocated Nodes" in the format tscc-x-XX:YY where the two digits after the colon are the number of processors requested.
- ssh into the node your job is using "ssh tscc-x-XX"
- cd /var/spool/torque/spool
- There should be two files associated with your job ID, labeled something like "XXXXXXX.OU" and "XXXXXXX.ER". These are the stdout and stderr files, respectively, and you can view them to check the output of your job. This is a convenient way to check job progress; the steps are summarized as a command sequence below.
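Putting the steps above together; the job ID 1234567, the node tscc-2-33, and the processor count are placeholders:

qstat -u $USER        # find your job ID, e.g. 1234567.tscc-mgr.local
checkjob 1234567      # "Allocated Nodes" shows e.g. tscc-2-33:16
ssh tscc-2-33         # log in to the node running the job
cd /var/spool/torque/spool
tail -f 1234567.OU    # stdout; the matching .ER file is stderr (exact names may vary as noted above)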
- Installing perl modules locally
- Use cpan to install perl modules.
- First determine where you want your local Perl modules, e.g. /home/userID/perlmods.
- From within CPAN shell, enter the following (2) commands:
- cpan[1]> o conf mbuildpl_arg "--install_base /home/userID/perlmods"
- cpan[2]> o conf makepl_arg "PREFIX=/home/userID/perlmods"
- If you want to save these settings permanently, type 'o conf commit' (without the quotes). Otherwise you need to run the above commands every time you start CPAN.
- Now to install a module:
- cpan[3] install Statistics::Descriptive
- Next, set the PERL5LIB path by adding the following lines to ~/.bash_profile or ~/.bashrc
- export PERL5LIB=/home/userID/perlmods:$PERL5LIB
- Finally, source the .bash_profile or .bashrc so you can use it immediately.
- source ~/.bash_profile
Printers
- Brother HL-5250DN, PFBH406, attached to Vaio-Z540 through USB. IP=192.168.1.24.
- Used to be attached to ZhangLab-VGN through USB. IP=132.239.135.123. Before that, attached to ZhangLab-GelDoc through USB. IP=132.239.25.45
- Can access via smb://192.168.1.24/BHL5250DN on Macs.
- Brother HL-4150CN, PFBH402, network printer, IP=172.16.25.87.
- Brother MFC-7220, PFBH402, attached to Time Capsule through USB.
- Brother HL-L2340D, PFBH406, network printer, IP=192.168.1.22. Access through ZhangLab network.
Share computers
- ZhangLab-GelDoc, PC desktop, attached to the BioRad GelDocXL system.
- IP: 132.239.25.45.
- To map the shared data folder, use the network path: \\132.239.25.45\GelImages
- To map the shared data folder on a Mac, use smb://132.239.25.45/GelImages
- ZhangLab-Chromo4, PC desktop, attached to Biorad Chromo4 Thermalcycler.
- IP: 132.239.135.43.
- To map the shared data folder, use the network path: \\132.239.135.43\Users
- ZhangLab-VGN, PC laptop.
- ZhangLab-IX81, the WinXP workstation attached to the Olympus IX81 microscope. \\172.16.25.86 MAC address: 00-25-64-A8-9D-1C, the password is "GenomeClub".
Biogem Data
- To obtain data from Biogem, you must have sudo privileges.
- On genome-miner, issue the command:
sudo mount /media/Biogem
- To unmount the drive once you are done, type:
sudo umount /media/Biogem
RAID Enclosures
- Buffalo DriveStation Quattro™ - HD-QSSU2/R5
- Ext4T. Maximum capacity is 8TB (4x2TB)
- Other World Computing (OWC) Mercury Elite Pro Qx2 RAID
- Ext12T
Mounting the RAID Enclosures
- Athurva's notes (October 29th, 2014):
- The particular version of Linux on genome-miner sometimes has trouble scanning large drives with mount. You can use gparted to scan the drives, and then the mount command will work.
- If you launch gparted, go to the troublesome drive, right click, choose manage flags, uncheck and then recheck the raid flag, and then hit OK, the drive will get rescanned and mount commands will then work from the command line. (A possible command-line alternative is sketched below.)
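A command-line way to trigger the same kind of partition-table rescan, which may work as an alternative to the gparted flag trick (not verified on genome-miner; the device name is an example):

# Ask the kernel to re-read the partition table of the troublesome drive
sudo partprobe /dev/sdc
# or, equivalently, for a single device
sudo blockdev --rereadpt /dev/sdc
# then retry the mount command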
Synology NAS - Diskstation
Synology NAS on genome-miner.ucsd.edu:
Path | Capacity | RAID level | Model | MasterOrSlave | Location |
---|---|---|---|---|---|
/media/NAS1_volume1 | 30 Tb | RAID 5 | DS2411+ | Master | PFBH 447 |
/media/NAS1_volume2 | 30 Tb | RAID 5 | DX1211 | Slave | PFBH 447 |
/media/NAS2_volume1 | 30 Tb | RAID 5 | DS2413+ | Master | PFBH 036 |
/media/NAS2_volume2 | 60 Tb | RAID 5 | DX1215 | Slave | PFBH 036 |
/media/NAS3_volume1 | 60 Tb | RAID 5 | DS2415+ | Master | PFBH 447 |
/media/NAS3_volume2 | 60 Tb | RAID 5 | DX1215 | Slave | PFBH 447 |
Install Notes
- Installed Synology Diskstation with 6 Hitachi 3 TB Drives
- Part numbers: 0S03230 (Bays 1-3) and 0F12460 (Bays 4-6)
- Seems to be the same drive, but from different production cycles?
- Diskstation recognizes these as the same drive model
- Installed Synology DSM Software using included disk
- Currently formatting all six drives as a RAID5 - ~15 Tb storage space
- Decided to perform full disk check to make sure we do not have any failures
Administering the NAS
- Visit its IP Address (currently 192.168.1.22:5000, you must be on the ZhangLab wireless router) with any web browser
- Username is admin, password is Zhang lab's common password
- Can control disks, file sharing, and more from web interface
Accessing the NAS
- October 29, 2014:
- Can log onto the Synology via 192.168.137.5:5000 on genome-miner using zhanglab and the Zhang lab's common password.
- Athurva's command example to mount Synology drives on genome-miner:
sudo mount.cifs \\\\192.168.137.5\\Syn_15T /media/Syn_15T -o username=ZhangLab,dir_mode=0775,uid=ajgore,gid=sambashare,noperm
sudo mount.cifs \\\\192.168.137.5\\LTS_15T /media/LTS_15T -o username=ZhangLab,dir_mode=0775,uid=ajgore,gid=sambashare,noperm
sudo mount.cifs \\\\192.168.137.5\\LTS_33T /media/LTS_33T -o username=ZhangLab,dir_mode=0775,uid=ajgore,gid=sambashare,noperm
- Athurva's mounts:
//192.168.137.5/LTS_15T/ ... ... ... 68% /media/LTS_15T
//192.168.137.5/LTS_33T/ ... ... ... 69% /media/LTS_33T
//192.168.137.5/Syn15T/ ... ... ... 23% /media/Syn_15T
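Note that each mount point has to exist before mount.cifs is run; a minimal sketch for a fresh setup (the mount point names follow the commands above):

# Create the mount points once, then run the mount.cifs commands above
sudo mkdir -p /media/Syn_15T /media/LTS_15T /media/LTS_33T
# The ZhangLab share password will be prompted for unless a credentials option is added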
Installation Plans
Step 1
- Copy SeqStore2 and other contents of Ext9T and Ext6T to 15 Tb partition
- Erase Ext9T and Ext6T
- Move the 3 functional 3TB drives from Ext9T to Diskstation
- Set them up as either individual drives or RAID0 most likely (Scratch Space)
- Load a backup 2TB drive into Ext9T
- Re-setup each as either RAID5 or RAID0
Step 2
- Dr. Zhang has ordered 5 more 3TB hard drives - load 3 of them into Diskstation
- Set them up as either individual drives or RAID1? (Use for more long-term storage.)
Step 3
- Next, temporarily back up Ext4T to Diskstation
- Erase Ext4T, remove the two drives
- Load two 3TB drives into Ext4T, set up as a 6 TB RAID0
- Move data back from Diskstation to Ext4T
Results
- This will leave us with:
- Synology Diskstation: 15 TB RAID5, 18 TB individual disks, joined, or RAID0
- Ext9T: 6-8 TB RAID5 or RAID0
- Ext6T: 6-8 TB RAID5 or RAID0
- Ext4T: 6 TB RAID0
- Internal Storage on Genome-Miner: 8 TB in 4 disks
- This will leave us with 3 2TB hard drives left over for backups.
- Should order more Hitachi drives of this model; can use them as hotswaps when RAID fails
Synology NAS DiskStation (NAS2 in the microscope room)
Install Notes
- Model Name: DS2413+
- Installed at 02/01/2013
- Location : PFBH 038 Confocal room
Network Information
- Updated 1/7/2016: due to a volume crash in 12/2015, the Synology's IP address had to be changed to restore remote access
- MAC address
- 00:11:32:19:59:EF
- 00:11:32:19:59:F0
- IP Address
- 132.239.25.19
- gateway 132.239.25.1
- Subnetmask 255.255.255.0
- nameserver 132.239.0.252 (not sure about this)
- Server Name: ZhangLabNAS2
- Access
- bioeng-25-19:5000 (or http://132.239.25.19:5000) in any web browser (The given machine name of primary IP address from UCSD hostmaster is "bioeng-25-19")
- Username is admin, password is Zhang lab's common password (capital G and C)
- 2 failed login attempts within 10 min will lock you out for a day
HDD Install and Volume management
- 12 HDDs have been installed (3TB each, Toshiba, 36TB total --> 30TB available + 6TB backup).
- Created volume as RAID5
Synology NAS DiskStation (NAS3 in sequencing room)
Install Notes
- Model Name: DS2415+, 12 X 6TB drives
- Installed at 10/17/2015
- Location : Gene Sequencing Room, connected to NetGear Switch
- Drive mount command:
sudo mount.cifs \\\\192.168.137.6\\LTS_60T /media/LTS_60T -o username=ZhangLab,dir_mode=0775,uid=ajgore,gid=sambashare,noperm
- Installation guide:
- (1) Plug all the drives in.
- (2) Connect at least one ethernet connection from the new NAS to the NetGear switch, and a second ethernet connection from the NetGear switch to a laptop. (Probably can't use Genome-miner unless we can disconnect Genome-miner from the wall jack.) For the second ethernet connection, borrow the one connected to the GA IIx PC.
- (3) Configure the network IPv4 to the same IP range and subnet. I used 192.168.137.10 and 255.255.255.0
- (4) Download and install Synology Assistant. If you haven't already, download the .pat file for installing the latest DSM on the new Synology. Turn off wireless networking.
- (5) Run Synology Assistant, should be able to see both the DiskStation and new Synology NAS
- (6) Install and configure the network connection for the new NAS:
- manually set the same IP range as the others, and subnet 255.255.255.0
- (7) Once the new NAS is completely installed, log onto the NAS and set up a new Share Folder. Set up ZhangLab user account. Make sure guest account is disabled. Configure other settings to allow SSH, NFTP, file sharing, etc.
Network Information
- IP Address
- 192.168.137.6
- 192.168.137.7
- Default gateway: 192.168.137.1
- Subnetmask: 255.255.255.0
- DNS server: 192.168.137.1
- Server Name: ZhangLabNAS3
- Access
- Only via the NetGear switch and must be in the same IP range.
- Username is admin, password is Zhang lab's common password