Tag Archives: Xserve

Ubuntu – Fix “No Video Signal” Issue on Emu

An issue with Emu cropped up a few weeks ago that was seemingly caused by upgrading from Ubuntu 16.04 to 18.04.

However, the problems only seemed related to using Emu via the GUI; users could still use Emu as a headless computer via SSH.

Today, I was upgrading some packages and noticed two things:

  1. When initially logging in to Emu.
    
    sam@swoose:~$ ssh emu
    Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-57-generic x86_64)
    
    * Documentation:  https://help.ubuntu.com
    * Management:     https://landscape.canonical.com
    * Support:        https://ubuntu.com/advantage
    
    0 packages can be updated.
    0 updates are security updates.
    
    New release '18.04.1 LTS' available.
    Run 'do-release-upgrade' to upgrade to it.
    
    You have mail.
    Last login: Tue Oct  2 07:30:32 2018 from 128.95.149.75
    

    This is showing that Emu is still running Ubuntu 16.04, not 18.04 as presumed!

  2. An error in the GRUB config generation process when upgrading packages.


run-parts: executing /etc/kernel/postrm.d/zz-update-grub 4.4.0-134-generic /boot/vmlinuz-4.4.0-134-generic
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.4.0-137-generic
Found initrd image: /boot/initrd.img-4.4.0-137-generic
Found linux image: /boot/vmlinuz-4.4.0-135-generic
Found initrd image: /boot/initrd.img-4.4.0-135-generic
Found linux image: /boot/vmlinuz-4.4.0-57-generic
Found initrd image: /boot/initrd.img-4.4.0-57-generic
Found linux image: /boot/vmlinuz-4.4.0-53-generic
Found initrd image: /boot/initrd.img-4.4.0-53-generic
error: syntax error.
error: Incorrect command.
error: syntax error.
Syntax error at line 98
Syntax errors are detected in generated GRUB config file.
Ensure that there are no errors in /etc/default/grub
and /etc/grub.d/* files or please file a bug report with
/boot/grub/grub.cfg.new file attached.
done
Processing triggers for libc-bin (2.23-0ubuntu10) ...

These two bits of information led me to believe that the problem wasn’t an incompatibility between the 18.04 upgrade and this old Apple Xserve hardware (since the upgrade never actually completed). Instead, it appeared that the upgrade had been initiated but aborted, which modified the GRUB configuration file(s) and broke the GUI – much like the problem I addressed earlier this summer.

When I fixed the display/GUI issues with Emu and Roadrunner earlier this summer, I noted that the /etc/default/grub files on the two computers were slightly different, despite the fact that they should be identical. So, I replaced the /etc/default/grub file on Emu with the one from Roadrunner and rebooted Emu (a sketch of the steps is below).
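
For reference, a minimal sketch of that fix; it assumes passwordless SSH between the two machines and that "roadrunner" resolves from Emu:

# Back up the suspect file, pull the known-good copy from Roadrunner,
# then regenerate grub.cfg and reboot (run on Emu).
sudo cp /etc/default/grub /etc/default/grub.bak
scp roadrunner:/etc/default/grub /tmp/grub.roadrunner
sudo cp /tmp/grub.roadrunner /etc/default/grub
sudo update-grub
sudo reboot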

Contents of /etc/default/grub file on Emu/Roadrunner, for future reference:


# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
#   info -f grub -n 'Simple configuration'

GRUB_DEFAULT=0
#GRUB_HIDDEN_TIMEOUT=0
GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX=""

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true

# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"

Voila! Emu now has a functional display/GUI again!

Sequencing Data Analysis – C.virginica Oil Spill MBDseq Concatenation & FastQC

Per Steven’s request, I concatenated our Crassostrea virginica LSU oil spill MBDseq sequencing data and ran FastQC on the concatenated files.

Here’s the list of input files:

2112_lane1_ACAGTG_L001_R1_001.fastq.gz
2112_lane1_ACAGTG_L001_R1_002.fastq.gz
2112_lane1_ATCACG_L001_R1_001.fastq.gz
2112_lane1_ATCACG_L001_R1_002.fastq.gz
2112_lane1_ATCACG_L001_R1_003.fastq.gz
2112_lane1_CAGATC_L001_R1_001.fastq.gz
2112_lane1_CAGATC_L001_R1_002.fastq.gz
2112_lane1_CAGATC_L001_R1_003.fastq.gz
2112_lane1_GCCAAT_L001_R1_001.fastq.gz
2112_lane1_GCCAAT_L001_R1_002.fastq.gz
2112_lane1_TGACCA_L001_R1_001.fastq.gz
2112_lane1_TTAGGC_L001_R1_001.fastq.gz
2112_lane1_TTAGGC_L001_R1_002.fastq.gz

All commands were run on roadrunner (Apple Xserve; Ubuntu 16.04). See Jupyter notebook below for details.
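
For context, here's a minimal sketch of the concatenation and QC steps (output filenames, the output directory, and the thread count are illustrative):

# Concatenate the per-barcode fastq.gz chunks into single files
# (gzip files can be concatenated directly), then run FastQC and MultiQC.
mkdir -p fastqc_out
for barcode in ACAGTG ATCACG CAGATC GCCAAT TGACCA TTAGGC; do
    cat 2112_lane1_${barcode}_L001_R1_*.fastq.gz > 2112_lane1_${barcode}_L001_R1.fastq.gz
done
fastqc --threads 6 --outdir fastqc_out 2112_lane1_*_L001_R1.fastq.gz
multiqc fastqc_out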

Jupyter notebook (GitHub):


RESULTS:

The concatenated gzip files and FastQC/MultiQC files are in the output folder linked below.

Output folder:

MultiQC report (HTML):

Ubuntu – Fix “No Video Signal” Issue on Emu/Roadrunner

Both Apple Xserves (Emu/Roadrunner) running Ubuntu (16.04 LTS) experienced the same issue – the monitor would indicate “No Video Signal”, go dark, and remain unresponsive to keyboard/mouse input. However, you could SSH into both machines without issue.

Although having these machines be “headless” (i.e. with no display) is usually fine, it’s not ideal for a couple of reasons:

  1. Difficult for other lab members who aren’t as familiar with SSH to use – specifically if they want to use a Jupyter Notebook remotely, which would require setting up an SSH tunnel to their own computer.

  2. Can’t use Remmina remote desktop until a user has physically logged in at the Ubuntu login screen at least once (a desktop session has to be running before Remmina can connect).

The second point was the major impetus for finally dealing with this. Accessing these computers via remote desktop makes it much easier to manage long-running Jupyter Notebooks than relying on an SSH tunnel, since a tunnel only provides access to the Jupyter Notebook from the computer on which the tunnel was set up.
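
For comparison, a minimal sketch of the SSH tunnel approach, assuming Jupyter is listening on port 8888 on Emu and that "emu" is a configured SSH host:

# Forward local port 8888 to Jupyter on Emu; the notebook is then only
# reachable at http://localhost:8888 from the machine running this tunnel.
ssh -N -L 8888:localhost:8888 emu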

Well, this led me down a horrible rabbit hole of Linux stuff that I won’t get fully into (particularly since I didn’t understand most of it and can’t remember all the crazy stuff I read/tried).

However, here’s the gist:

  1. Needed to edit /etc/default/grub

  2. After editing, needed to update grub config file: sudo update-grub

Despite the fact that both machines are (or should be) identical, I did not get the same results. The edits I made to the /etc/default/grub file on Emu worked immediately (a sketch of the full sequence follows the list). The edits were:

  1. Add nomodeset to the following line (shown below as edited); this seemed to be the most common suggestion for fixing the “No Video Signal” issue:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"

  2. Comment out the following line (it was triggering an error/warning about writing the config file when running the update-grub command):

#GRUB_HIDDEN_TIMEOUT=0
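
For reference, a sketch of the full sequence (any text editor works; nano is just an example):

sudo nano /etc/default/grub   # add nomodeset; comment out GRUB_HIDDEN_TIMEOUT
sudo update-grub              # regenerate /boot/grub/grub.cfg
sudo reboot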

For some reason, Roadrunner did not take kindly to those changes, and it took a long time to resolve. The fix ended with changing the permissions on ~/.Xauthority back to their original values (they got altered when I ran some command – sudo startx or something) in order to get out of a login loop.
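
The ownership fix looked something like the following (the username is illustrative); running X with sudo can leave ~/.Xauthority owned by root, which breaks graphical logins for that user:

# Check the owner, then hand the file back to the affected user.
ls -l /home/srlab/.Xauthority
sudo chown srlab:srlab /home/srlab/.Xauthority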

Regardless, both are fixed, both can be used when physically sitting at the computer, and both can be accessed remotely using Remmina!

Ubuntu Installation – Convert Apple Xserve “bigfish” to Ubuntu

Due to hardware limitations on the Apple Xserves we have, we can’t use drives >2TB in size. “Bigfish” was set up to be RAID’d and, as such, has three existing HDDs installed.

We wanted to upgrade the HDD size and convert over to Linux (Ubuntu) so that we could utilize the Linux operating system for some of our bioinformatics programs that won’t run on OSX.

I installed Ubuntu 16.04 LTS on the SSD boot drive (128GB) and installed three 2TB HDDs. However, Ubuntu cannot detect the HDDs due to the Apple hardware RAID controller! Searching the internet has revealed that this is a commonly encountered issue with RAID’d Apple Xserves and Linux installs.
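
A quick way to see what Ubuntu does and doesn't detect (a sketch, not the exact commands I ran):

lspci | grep -i raid        # the hardware RAID controller is visible on the PCI bus
lsblk -o NAME,SIZE,MODEL    # but the HDDs behind it never show up as block devices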

I haven’t come across a way to remedy this. We will likely have to install a version of OS X in order to make this computer usable. That won’t limit us too terribly in regards to program usage, though; most programs will run fine on OS X.

Computer Management – Additional Configurations for Reformatted Xserves

Sean got the remaining Xserves configured to run independently from the master node of the cluster they belonged to and installed OS X 10.11 (El Capitan).

The new computer names are Ostrich (formerly node004) and Emu (formerly node002).

He enabled remote screen sharing and remote access for them.

Sean also installed a working hard drive on Roadrunner and got that back up and running.

I went through this morning and configured the computers with some other changes (some for my user account, others for the entire computer):

  • Renamed computers to reflect just the corresponding bird name (hostnames had been labeled as “bird name’s Xserve”)

  • Created srlab user accounts

  • Changed srlab user accounts to Standard instead of Administrative

  • Created steven user account

  • Turned on Firewalls

  • Granted remote login access to all users (instead of just Administrators)

  • Installed Docker Toolbox

  • Changed power settings to start automatically after power failure

  • Added computer name to login screen via Terminal:

sudo defaults write /Library/Preferences/com.apple.loginwindow LoginwindowText "TEXT GOES HERE"

  • Changed computer HostName via Terminal so that Terminal displays computer name:

sudo scutil --set HostName "TEXT GOES HERE"

  • Installed Mac Homebrew (I don’t know if installation of Homebrew is “global” – i.e. installs for all users)

  • Used Mac Homebrew to install wget

  • Used Mac Homebrew to install tmux
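
For reference, those two Homebrew installs are one-liners:

# Install wget and tmux via Homebrew
brew install wget
brew install tmux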

Computing – Amazon EC2 Cost “Analysis”

I recently moved some computing jobs over to Amazon’s Elastic Compute Cloud (EC2) in an attempt to avoid some odd computing issues/errors I kept encountering on our lab computers (Apple Xserve 3,1).

The big trade-off here is that the lab computers are already paid for, while using EC2 means we’ll be sinking more money into computing resources. With that expense should come faster processing (i.e. less time) to perform various analyses. As they say, time is money…

Let’s look at how things’ve worked out so far.

First, how much did we spend and how did we spend it (click on the image to enlarge)?

Of course, it’s easy to see that the instance I was running cost us $0.419/hr. That’s great and all, but you sort of lose a sense of what that ends up costing over the long term. Let’s look at how things break out over a larger time scale.

According to Amazon’s (very useful!) billing breakdown, we spent $187 in the month of July 2016. This doesn’t seem too bad. In fact, this would only cost us ~$2200/yr if we continue to run this instance in this fashion. However, let’s look at it a bit further.

We see that the instance ran for a total of 374 hrs during July 2016. Divide that by 24hrs/day and we see that the instance was running for 15.6 days; just over half the month. That means we would’ve spent ~$374 for the full month, which would equate to $4488/yr. For our lab, that kind of money starts to add up and one starts to wonder if it wouldn’t be better to invest in higher end hardware to use in the lab with a single “sunk” cost that will last us many, many years.
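
That extrapolation can be sanity-checked with a quick bit of shell arithmetic (the inputs are the figures from the billing summary above; small differences from the numbers in this post are just rounding):

# July-so-far spend scaled to a full 31-day month, then annualized
echo "scale=2; 187 / (374 / 24) * 31" | bc        # ~372  (≈ the ~$374/month above)
echo "scale=2; 187 / (374 / 24) * 31 * 12" | bc   # ~4464 (≈ the ~$4488/yr above)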

Regardless, with the lab’s current computing hardware, we should compare another factor that’s involved with the expense of using Amazon EC2 instead of our lab computers: time.

I performed a very rough “guestimation” of the time savings that EC2 has provided us.

I compared the length of “real” time for the first step in the PyRad program using the same data set on one of our lab computers (roadrunner) and the Amazon EC2 instance:

  • roadrunner: 1118 minutes

  • EC2: 771 minutes

Roadrunner is nearly 1.5x slower than the EC2 instance! To really appreciate what type of impact that has, we should look at the run time for the full PyRad analysis:

  • roadrunner: 5546 minutes (NOTE: Due to incomplete analysis, roadrunner time is “guestimated” as 1.45 x EC2; see below)

  • EC2: 3825 minutes

Let’s convert those numbers into something more easily understood – hours and days:

  • roadrunner: ~92hrs

  • roadrunner: ~4 days

  • EC2: 63hrs

  • EC2: ~2.6 days

Of course, these times don’t take into account any technical issues that we might encounter (and I have encountered many technical issues using roadrunner) on either platform, but I can tell you that I’ve not had any headaches using EC2 (other than unintentional, self-imposed ones).

Another potential option is trying out InsideDNA. They offer cloud computing services specifically geared towards high-throughput bioinformatics analysis. They have many, many bioinformatics tools already installed and available to use on their platform. Additionally, they have nice tutorials on how to use some of these tools, which goes a long way toward getting started on any analyses using new software. Here are the various pricing tiers that they offer:

The “Advanced” tier ($100/month) certainly seems like it could potentially be better than using Amazon. However, this tier only offers 500GB of storage. If you look up above at the Amazon pricing breakdown, you’ll notice that I’ve already used 466GB of storage for just that one experiment! Additionally, the 1000 CPU hours seems great, but remember, this is likely divided by the number of CPUs that you end up using. The Amazon EC2 instance was running eight cores. If I were to run a similar setup on InsideDNA, those 1000 CPU hours would amount to only 125 hours of wall-clock time (1000 ÷ 8 cores). Again, looking up above, we see that I ran the EC2 instance for 374 hours! That means the “Advanced” tier on InsideDNA wouldn’t be enough to get our jobs done.

Anyway, in the grand scheme of things, using an Amazon EC2 instance periodically as we need it throughout the year isn’t terrible. However, if we start using the University of Washington Hyak computing cluster we may be able to avoid spending on EC2 and be able to have similar time savings (compared to using the lab computers). Need to get cracking on that…

Docker – VirtualBox Defaults on OS X

I noticed a discrepancy between what system info is detected natively on Roadrunner (Apple Xserve) and what was being shown when I started a Docker container.

Here’s what Roadrunner’s system info looks like outside of a Docker container:

However, here’s what is seen when running a Docker container:

It’s important to notice that the Docker container is only seeing 2 CPUs. Ideally, the Docker container would see that this system has 8 cores available. By default, however, it does not. In order to remedy this, the user has to adjust settings in VirtualBox. VirtualBox is the virtual machine manager that gets installed with the Docker Toolbox for OS X. Apparently, Docker runs within VirtualBox, but this is not really transparent to a beginner Docker user on OS X.

To change the way VirtualBox (and, in turn, Docker) can access the full system hardware, you must launch the VirtualBox application (if you installed Docker using Docker Toolbox, you should be able to find this in your Applications folder). Once you’ve launched VirtualBox, you’ll have to turn off the virtual machine that’s currently running. Once that’s been accomplished, you can make changes and then restart the virtual machine.
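
The same changes can also be made from the command line instead of clicking through the VirtualBox GUI. Here's a sketch using docker-machine and VBoxManage, assuming the Docker Toolbox VM has the default name "default":

# Stop the VM, give it all 8 CPUs and 24GB of RAM, then start it back up.
docker-machine stop default
VBoxManage modifyvm default --cpus 8 --memory 24576
docker-machine start default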

Shutdown VirtualBox machine before you can make changes:

Here are the default CPU settings that VirtualBox is using:

Maxed out the CPU slider:

Here are the default RAM settings that VirtualBox is using:

Changed RAM slider to 24GB:

Now, let’s see what the Docker container reports for system info after making these changes:

Looking at the CPUs now, we see it has 8 listed (as opposed to only 2 initially). I think this means that Docker now has full access to the hardware on this machine.

This situation is a weird shortcoming of Docker (and/or VirtualBox). Additionally, I think this issue might only exist on the OS X and Windows versions of Docker, since they require the installation of the Docker Toolbox (which installs VirtualBox). I don’t think Linux installations suffer from this issue.

Computer Setup – Cluster Node003 Conversion

Here’s an overview of some of the struggles getting node003 converted/upgraded to function as an independent computer (as opposed to a slave node in the Apple computer cluster).

  • 6TB HDD
  • Only 2.2TB recognized when connected to Hummingbird via Firewire (internet suggests that is the max for Xserve; USB might recognize the full drive) – Hummingbird is a converted Xserve running Mavericks
  • Reformatted on different Mac and full drive size recognized
  • Connected to Hummingbird (via USB) and full 6TB recognized
  • Connected to Mac Mini to install OS X
  • Tried installing OS X 10.8.5 (Mountain Lion) via CMD+r at boot, but failed partway through installation
  • Tried and couldn’t reformat drive through CMD+r at boot with Disk Utility
  • Broken partition tables identified on Linux, used GParted to establish partition table, back to Mac Mini and OS X (Mountain Lion) install worked
  • Upgraded to OS X 10.11.5 (El Capitan)
  • Inserted drive to Mac cluster node003 – wouldn’t boot all the way – Apple icon, progress bar > Do Not Enter symbol
  • Removed drive, put original back in, connected 6TB HDD via USB, but booting from USB not an option (when booting and holding Option key)
  • Probably due to node003 being part of cluster – reformatted original node003 drive with clean install of OS X Server.
  • Booting from USB now an option and worked with 6TB HDD!
  • Put 6TB HDD w/El Capitan in internal sled and won’t boot! Apple icon, progress bar > Do Not Enter symbol
  • Installed OS X 10.11.5 (El Capitan) on old 1TB drive and inserted into node003 – worked perfectly!
  • Will just use 1TB boot drive and figure out another use for 6TB HDD
  • Renamed node003 to roadrunner
  • Current plan is to upgrade from 12GB to 48GB of RAM and then automate moving data off this drive to long-term storage on Owl (Synology server).