
Getting Ubuntu 14.04 Running on an ASUS Z270M-PLUS Motherboard (Intel Z270) Without Upgrading to 16.04, Including VMWare Workstation 10 Patch for 14.04 LTS HWE

A really long title, which could probably be shortened to  “here’s how I wasted the last 3 FREAKING DAYS”.

So, my old computer hardware was showing its age, on its last legs, and being nursed along.  I decided it was time for an upgrade of the core components: RAM, mobo, processor, SSD root drive and a new case; I already had a new power supply “in stock”.

Anyway, the important point for this article is that the new motherboard was an ASUS Prime Z270M-PLUS, running the Intel Z270 chipset, and the processor a Core i5-6600 Skylake.

This was on my workstation system (please note: single boot, nothing Windows on it outside of vmware machines), so I didn’t really want to do a fresh from-scratch install of everything.  That would have been painful (a lot more painful than this new keyboard I’m typing on, because of slight differences in key positions, but I’ll get used to that), trying to get everything running again and set up how I want it.  Of course, I would bring my entire /home with me, but all the system configuration, the printers (multiple!), getting things like VMWare set up again and so forth would have to be redone.

My plan was to bring up the new system on a live boot USB key, rsync across my system drive on the old system (over network) to a fresh new SSD (my /home being on a separate drive) and then install grub, make sure it booted up ok, and then shut down the old system, detach the home drive, attach it to the new system, wipe hands on pants, job done.

Rather than make a long story of it, I’ll try and just summarise what I did wrong that cost me 3 days, and what I did right to fix it.

Make sure your existing system is updated

Seriously, I didn’t, and it bit me, so do.  Make sure that you have the security repository added; I didn’t for some reason and it caused problems later on.  Here are the enabled repositories in /etc/apt/sources.list

    deb http://archive.ubuntu.com/ubuntu/ trusty main universe restricted multiverse
    deb-src http://archive.ubuntu.com/ubuntu/ trusty main universe restricted multiverse #Added by software-properties
    deb http://archive.ubuntu.com/ubuntu/ trusty-updates universe main multiverse restricted
    deb http://archive.ubuntu.com/ubuntu/ trusty-proposed universe main multiverse restricted
    deb http://archive.ubuntu.com/ubuntu/ trusty-backports universe main multiverse restricted
    deb http://security.ubuntu.com/ubuntu/ trusty-security multiverse restricted universe main

Then 

    sudo apt-get update  && sudo apt-get dist-upgrade

Make sure you resume and then shut down any suspended VMWare (or other Virtual) Machines fully.

Because you are changing hardware, it’s probably going to cause problems if you try and resume a suspended VMWare machine on different hardware (it did for me).   
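
If you have a lot of virtual machines, vmrun (the command line tool that ships with Workstation) can do the resume-then-shut-down dance for you.  A rough sketch, where the .vmx path is obviously a placeholder for your own machines:

    vmrun list                                            # shows any VMs that are currently running
    vmrun -T ws start "/path/to/your-vm/your-vm.vmx"      # a suspended VM resumes when started
    vmrun -T ws stop "/path/to/your-vm/your-vm.vmx" soft  # then ask the guest for a clean shutdown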

Make sure you have a USB network connection you can use

Even if it’s just tethering your phone, there is a good chance that your new system won’t be able to access the built-in network device from within your 14.04 operating system until you have enabled HWE (in short, the official backported Linux kernel and X drivers from 16.04), and you don’t want to do that on the existing working system if it isn’t already enabled (don’t fix what’s not broke, right?).

So yes, to save headaches, have a USB wifi dongle handy that you know works on your existing system.

Creating a Live USB stick

Ok, I arsed about for a LONG time trying to get this to work, because I thought it was way more complicated than it is, and I got lost in a mire of boot disk creators.

Here’s the thing: with UEFI, making a live boot USB stick does not require anything more than a way to get at the contents of the ISO.

  cd /tmp
  mkdir iso-contents 
  sudo mount -o loop [path-to]/kubuntu-16.04.1-desktop-amd64.iso /tmp/iso-contents

Now just copy everything in there onto your clean, blank flash drive.

  cp -a /tmp/iso-contents/. /path/to/your/usb-key/

That’s IT, that’s all you have to do.  You do not need unetbootin, or tuxboot, or boot disk creator or anything if you will be using UEFI to boot instead of  “bios”, and you will be.
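
One assumption buried in that: for the UEFI firmware to boot the stick, it needs a filesystem the firmware can read, which in practice means FAT32.  If your key isn’t already formatted FAT32, something like this will do it (the device name is a placeholder; confirm yours with lsblk first, because this wipes the key):

    lsblk                                      # find the USB key's device name
    sudo mkfs.vfat -F 32 -n LIVEKEY /dev/sdX1  # placeholder device -- destroys whatever is on it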

Make it persistent

It’s very worthwhile to make your live boot persistent; you will regret it if you don’t, although it does slow it down a LOT.

    cd /path/to/your/usb-key/
    dd if=/dev/zero of=casper-rw bs=1M count=1000
    mkfs.ext3 -F casper-rw

count must be less than 4096 (the key is FAT32, which can’t hold a file of 4GB or more) and must be less than the free space (in M) on the usb key.

Then edit the .cfg files (grub.cfg and loopback.cfg, I’m not sure if both are used or what, I just edited both) on /path/to/your/usb-key/ to add the option “persistent” to the boot commands (i.e. where it says “quiet splash”, add in “persistent”).

While you are editing those two files, also copy the menuentry block for “Start Kubuntu”, make a duplicate “Start Kubuntu nomodeset”, and add the nomodeset option to that command.
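
For illustration, the duplicated entry ends up looking roughly like this (the kernel/initrd paths here are what I’d expect on the Kubuntu 16.04.1 ISO and may differ on yours; the important part is tacking persistent and nomodeset onto the linux line):

    menuentry "Start Kubuntu nomodeset" {
        set gfxpayload=keep
        linux  /casper/vmlinuz.efi file=/cdrom/preseed/kubuntu.seed boot=casper quiet splash persistent nomodeset ---
        initrd /casper/initrd.lz
    }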

Booting your live usb

Here again I fell into a trap: after I booted to grub I got a black screen.  I thought it was perhaps secure boot and went to a lot of trouble figuring out how to disable it (set an admin password in the bios, then delete all the secure boot keys; don’t worry, you can reset them later if you need to), then I thought it was various other things and tweaked lots of stuff.

What was it?  I needed nomodeset, that’s why I suggest adding a nomodeset boot option above when you create your key.

So, power on with the key in, mash on F8 when the ASUS logo shows and you’ll get a boot menu; select the option to boot the USB key (UEFI), choose your nomodeset option at the grub screen, and wait.

I wasted so much time getting to this point, don’t overcomplicate your life.

Screen resolution in the live booted environment

For me the live boot only comes up in 1024x768.  Don’t worry about it; I mucked about trying to get the live boot to work in higher resolutions and failed.  I don’t know what causes it, I don’t care, and neither should you: it’s only for getting you back up and running.

Install boot-repair on the live boot

boot-repair is what will get grub set up on the new drive, so in your live booted environment:

    sudo add-apt-repository ppa:yannubuntu/boot-repair
    sudo apt-get update
    sudo apt-get install boot-repair 

Since you set up a persistence file, this will stick around now.

Partition your new system drive

The plan is to copy your system drive from the old system across to a new system drive for the new system.

Partition your new drive in the live booted system.  This is where I fell into a trap because I didn’t really understand what all this new-fangled (10+ year old) UEFI was about; I’ll spare you the gory details and get straight to the facts.

You want a GPT partition table (if you get the option). 

You need two partitions: a small FAT32 one, and one with your regular desired filesystem (I use ext4).

The order of the partitions doesn’t matter. 

Mark the small FAT32 one as bootable (I don’t know how big it really needs to be; I saw people say “like 200M” and that’s what I made, but it only actually has about 10M of data on it; maybe if you dual boot you need a lot more).
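
If you prefer the terminal to a GUI partitioner, here is a parted sketch of the above, assuming the new SSD shows up as /dev/sdb (confirm with lsblk first, because this destroys whatever is on the drive):

    sudo parted /dev/sdb mklabel gpt
    sudo parted /dev/sdb mkpart ESP fat32 1MiB 201MiB
    sudo parted /dev/sdb set 1 boot on                 # on GPT this marks the EFI System Partition
    sudo parted /dev/sdb mkpart root ext4 201MiB 100%
    sudo mkfs.vfat -F 32 /dev/sdb1
    sudo mkfs.ext4 /dev/sdb2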

Mount and rsync your system drive

Ok, so now mount your new system drive in the live system, and rsync across the old system drive to the new system drive.

If you are doing this locally, that is, you’ve got both your old system drive and new system drive connected to the live booted system, then

  sudo mkdir /tmp/old-sys
  sudo mkdir /tmp/new-sys
  sudo mount /dev/[oldsystemdevice]  /tmp/old-sys
  sudo mount /dev/[newsystemdevice]  /tmp/new-sys
  sudo rsync -axHAXP --delete --numeric-ids \
     --exclude /tmp/ \
     --exclude /home \
     --exclude /anything-else-you-do-not-want \
     /tmp/old-sys/ /tmp/new-sys/

Take good note of that --numeric-ids option, you need it: without it rsync maps ownership by user and group name, and since the live system’s users don’t match your old system’s, the UIDs/GIDs on the copy end up wrong (believe me, another hour wasted!).

If you are doing this remotely, that is, your old/existing system is still running on the network and you want to transfer the files over the network to your new system drive:

  sudo mkdir /tmp/new-sys 
  sudo mount /dev/[newsystemdevice]  /tmp/new-sys     
  sudo rsync -axHAXP --delete --numeric-ids \
     --exclude /tmp/ \
     --exclude /home \
     --exclude /anything-else-you-do-not-want \
     -e 'ssh -X' --rsync-path='SUDO_ASKPASS=/usr/bin/ssh-askpass sudo -A rsync' [username]@[old system]:/ /tmp/new-sys/

you’ll be prompted for your ssh password as normal, and then for the sudo password (probably the same) via a dialog prompt.

If doing it remotely, it will not appear to “finish”!  It’s something to do with ssh not hanging up when the remote-end rsync is done; it just sits there, and so does the local-end rsync.  Anyway, watch rsync on the OLD system (ps ax | grep rsync), see when it finishes, and then you can just CTRL-C the local end.

Adjust the new system files

You will want to make some adjustments for things that the new system doesn’t have (yet).

    adjust /tmp/new-sys/etc/network/interfaces to disable any configuration you have for eth0 except for auto; if your network config is bad, it won’t work well (see the sketch after this list)

    adjust /tmp/new-sys/etc/fstab to disable drives that won't be connected yet

    if you have /home on a separate drive/partition and are not bringing it across just yet, you will want to set up a dummy /home for your user on the new system

    if you have moved mysql data somewhere that won’t be available, you’ll similarly want to copy at least a minimal set of the mysql databases so it can start up
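
As a rough sketch of the first two adjustments (the addresses and UUID here are placeholders, not from my system):

    # /tmp/new-sys/etc/network/interfaces -- leave loopback alone, keep the auto line,
    # comment out the rest of the eth0 configuration until the new NIC is sorted
    auto lo
    iface lo inet loopback

    auto eth0
    #iface eth0 inet static
    #    address 192.168.1.10
    #    netmask 255.255.255.0
    #    gateway 192.168.1.1

    # /tmp/new-sys/etc/fstab -- comment out drives that won't be connected yet
    #UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  ext4  defaults  0  2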

Run boot-repair

In theory, you should be ready to run boot-repair; this easy to use (in theory) software repairs the grub data and sets up your new EFI boot partition.

If you have both the new and old drives connected to the live boot, shut down, disconnect the old drive and put it aside.  Reboot the live environment and run boot-repair.

If you have only the new drive connected to the live boot, just unmount it and then run boot-repair.

For me, boot-repair failed.  After a couple of hours of digging, various misdirections and unnecessary work, I narrowed it down to needing to do the following, due to a dependency issue in the grub packages in the 14.04 repositories.  Mount the new system if it is not already; boot-repair leaves it mounted at /mnt/boot[something]/[device], which for my purposes was sda1, so adjust the below for yours.

    cd /mnt/boot[something]
    # give the chroot access to the live system's /proc, /dev and /sys
    mount --bind /proc sda1/proc && mount --bind /dev sda1/dev && mount --bind /sys sda1/sys
    chroot sda1
    cd /tmp
    # fetch and install the fixed grub packages inside the new system
    wget http://launchpadlibrarian.net/276306643/grub-common_2.02~beta2-9ubuntu1.12_amd64.deb
    wget http://launchpadlibrarian.net/276306644/grub2-common_2.02~beta2-9ubuntu1.12_amd64.deb
    dpkg -i grub-common_2.02~beta2-9ubuntu1.12_amd64.deb
    dpkg -i grub2-common_2.02~beta2-9ubuntu1.12_amd64.deb
    exit
    umount sda1/dev && umount sda1/proc && umount sda1/sys

So you can see what we are doing there: manually installing a couple of packages into the new system drive by using a chroot environment.  Easy when you know what the problem is; it’s a shame it took so much faffing about to work out what the problem was!

Anyway, after that fix, boot-repair worked.  Well, it tells you to do a few things and you have to go through the process twice (the first time through it asks you to issue an apt-get -f install command to fix some issues first).

Boot the new system

Finally the time has arrived, reboot, remove the USB key, and hopefully the newly installed grub on the new system drive will work and you’ll boot into the new system drive.  The resolution will probably be rubbish and the network not working, that’s fine.

Switch to a console (CTRL-ALT-F1) and log in as your regular user.  See if you have network connectivity.  If yes, good.  If no, plug in that backup USB-connected network device I told you you would probably need (if you don’t have one, maybe you can boot into the live key again, mount the new drive, bind mount proc, sys and dev into it, chroot into it and then do the below apt commands, but I have not tried it).
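
That untested fallback would look roughly like this, assuming the new root landed on /dev/sda1 (confirm with lsblk):

    sudo mount /dev/sda1 /mnt
    sudo mount --bind /proc /mnt/proc && sudo mount --bind /sys /mnt/sys && sudo mount --bind /dev /mnt/dev
    sudo cp /etc/resolv.conf /mnt/etc/resolv.conf   # so apt can resolve hostnames inside the chroot
    sudo chroot /mnt
    # ...then run the apt-get commands below, exit, and unmount everything in reverse order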

Now we want to install the  “Hardware Enablement Stack” (HWE), this is a set of packages from Ubuntu which bring in the new kernel from Ubuntu 16.04 and slot it into 14.04 so that you can use more recent hardware, such as the chipset of your brand spanking new motherboard. 

In short, I use i386 and amd64, so I ran:

    sudo apt-get install --install-recommends linux-generic-lts-xenial xserver-xorg-core-lts-xenial xserver-xorg-lts-xenial xserver-xorg-video-all-lts-xenial xserver-xorg-input-all-lts-xenial libwayland-egl1-mesa-lts-xenial libgl1-mesa-glx-lts-xenial libgl1-mesa-glx-lts-xenial:i386 libglapi-mesa-lts-xenial:i386

If you don’t use i386 binaries as well, you probably only need

    sudo apt-get install --install-recommends linux-generic-lts-xenial xserver-xorg-core-lts-xenial xserver-xorg-lts-xenial xserver-xorg-video-all-lts-xenial xserver-xorg-input-all-lts-xenial libwayland-egl1-mesa-lts-xenial

Reboot

We are almost done: reboot.  Now, all things being equal, you should have a full resolution X desktop and working networking without needing your USB dongle.
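
A quick sanity check from a terminal, just to confirm the HWE kernel actually took (the lspci line is only a suggestion for checking which driver grabbed the onboard NIC):

    uname -r                             # should now report a 4.4.x (lts-xenial) kernel
    lspci -nnk | grep -A3 -i ethernet    # the onboard NIC should show a "Kernel driver in use" line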

Fix VMWare Workstation 10 to work with the Ubuntu 16.04/14.04 HWE Linux Kernel 4.4.0 

I’ll save you the gory details and lots of google searching.  Short version: the upgrade to 14.04 HWE broke a bunch of the VMWare modules (vmmon, vmnet and vmci) in Workstation 10, and they required some patching to get working.

Download this file, extract it, and move the source directory to replace /usr/lib/vmware/modules/source.
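
A rough sketch of that move, assuming the archive extracted to a directory named source under ~/Downloads (adjust to wherever you actually extracted it):

    # keep a backup of the original module sources, then drop in the patched ones
    sudo mv /usr/lib/vmware/modules/source /usr/lib/vmware/modules/source.orig
    sudo cp -a ~/Downloads/source /usr/lib/vmware/modules/source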

Now VMWare workstation 10 should be able to compile them and run like it always did.

If you have a boot-loop problem, or complaints about “long mode”, assuming your new system is Intel based, check the bios to ensure that VT (Virtualization Technology) is enabled in the CPU Settings (found in Advanced Mode of the bios).

Also you will want to add  

    mks.gl.allowBlacklistedDrivers = "TRUE"

to your ~/.vmware/preferences file if it is not already there, as it seems the Intel video driver is blacklisted by VMWare by default (“may cause problems”, blah blah), which leaves you without any 3D Acceleration.

Finish up

Ok, we are pretty much back in business now.  All that remains is to reattach your other drives (eg /home if that was on a different drive) and configure your network: edit /etc/network/interfaces if you need to, and edit /etc/fstab if you need to.  Shut down the old system, shut down the new system, move the additional non-system drives over to the new system, and power everything up.  NOW you can call the job done and wipe your hands on your pants.
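
For the fstab side, a placeholder entry for a re-attached /home drive would look something like the line below; get the real UUID from sudo blkid (the one here is obviously fake):

    # /etc/fstab -- hypothetical entry for the re-attached /home drive
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  ext4  defaults  0  2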

What about upgrading fully to 16.0x? 

No.  Just don’t do it.  (I tried) 

Firstly, it will probably fail. (It did)

Secondly, it will probably eat your life trying to fix the fail. (It tried to do that too, but I gave up after only a couple hours)

Thirdly, 16.04 uses PHP 7, and if you are a web developer like me, early 2017 still feels too early to be moving stuff to PHP 7.

But if you really want to.

YOU MUST USE ppa-purge AND ERADICATE ALL PPA INSTALLED SOFTWARE FROM YOUR SYSTEM BEFORE YOU EVEN ATTEMPT TO UPGRADE FROM 14.0x TO ANY LATER DISTRIBUTION.  
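
For what it’s worth, ppa-purge is in the normal repositories, and purging the boot-repair PPA added earlier would look something like this (repeat for every PPA you find under /etc/apt/sources.list.d/):

    sudo apt-get install ppa-purge
    sudo ppa-purge ppa:yannubuntu/boot-repair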

That is the only chance you have of getting through a full upgrade process without complete and utter failure.  (I did not do this when I tried, causing a complete dependency melt-down during the install which I couldn’t recover from, and I had to start this process from scratch again; well, I had to rsync again and run boot-repair at least.)

I have not yet tried it with a full ppa-purge.  When I do though, I will once again be doing it through this “setup a duplicate system and make that work” process and NOT EVER TRYING TO UPGRADE A LIVE FUNCTIONING SYSTEM THAT YOU CARE EVEN A TINY LITTLE BIT ABOUT.