Custom Linux Distro for NVIDIA CUDA Devices

How to get started and build a minimal custom Linux distribution for embedded NVIDIA CUDA-enabled devices using the Yocto Project (YP) and OpenEmbedded (OE).

ELCNA technical talks and in-depth training

Embedded Linux Conference (ELC) is the premier vendor-neutral technical conference on embedded Linux and industrial IoT products. This year, ELC North America will be held in San Diego, August 21-23, 2019, at the beautiful Hilton Bayfront, right on the harbor and just over a pedestrian bridge from Petco Park and East Village.

On Thursday, August 22, Leon Anavi, Senior Software Engineer at Konsulko Group, will present a Comparison of Open Source Software Home Automation Tools that allow users to customize the setup depending on their own specific needs and manage devices manufactured by different vendors in one place. Leon will focus on popular open source tools, Home Assistant, OpenHAB and Domoticz, and explore the supported embedded Linux development boards on which these platforms can be installed, as well as the IoT devices with which they can interact out of the box.

Also on Thursday, Vitaly Wool, Senior Staff Engineer and General Manager of Konsulko AB, will give a technical talk on Secure Updates for a Memory Constrained XIP (eXecute In Place) System looking at technology that allows code to be executed directly from flash without copying the code to RAM first. The memory footprint can be optimized very tightly and this permits really low-power IoT Linux appliances. However, there is a big obstacle: no standard secure update process for such systems will work due to the very nature of XIP. How can you update the flash when it must always be ready to execute? This talk will provide some real world answers and examples.

The next day, Friday, August 23, Matt Porter, Konsulko Group CTO, will present a tutorial, Introduction to IIO and Input Drivers, briefly looking at the Linux IIO and Input subsystems and how to gather information from hardware documentation to assist in software development. In a guided hands-on lab, students will write their own game controller driver leveraging the IIO and Input kernel subsystems, and use it to play a game on their devices.

For the first time in 2019, Embedded Linux Conference North America will co-locate with Open Source Summit North America. We hope you can join us, along with 800+ developers and technical experts from across the globe for education, collaboration, deep-dive learning, and some good times in San Diego.

Konsulko Group sponsors TuxCon conference

On June 8th and 9th, Konsulko Group is proud to again be a Gold Sponsor of the 6th annual TuxCon conference in Plovdiv, Bulgaria, the 2019 European Capital of Culture.

Headquartered in California, Konsulko works with our customers throughout North America, Europe and Asia. Our European subsidiary, Konsulko Ltd is based in Sofia, Bulgaria.

If you are building a new product, we’d love to talk with you about engaging Konsulko’s engineering expertise and experience on your project. Or if you’re a software developer with a passion for Linux, please contact us about joining the Konsulko team.

Globally Employable Engineers

In 2004, we founded Embedded Alley Solutions, and many things we did for the next five years simply felt intuitive. We were, after all, embedded Linux engineers with little business, customer relationship management and human resources experience.

Intuition served us well. We were doing agile-style software development long before it became mainstream, and soon it seemed like everyone knew our name. Our ability to communicate internally was second to none, using nothing more than email and IRC, creating a close-knit team that felt as though everyone was in the same office, even though the team was highly distributed.

Our recruiting practice was not so much about hiring as it was about building and fostering relationships with top talent around the world until the time was right for both parties to make the move. Outside the US, we tried to find the best open source talent around the world, rather than building an “outsourcing center.” We grew the company with that stellar talent and were acquired in 2009.

In the years that followed, every now and then I would read advice in an article or business book, written by some acknowledged industry expert, and think “That’s right! That’s exactly what we were doing at EA!” It was interesting and amusing to find out from others that what we had done on the business side helped make Embedded Alley so successful. I note only the business aspects here because the engineering talent we had was hard to match.     

At Konsulko Group, I find myself having many of the same discussions with customers as we did back then, and I lean on that experience. One recurring discussion we had at EA was about our “offshore engineers,” as customers would often refer to engineers who resided outside the US. Embedded Alley had a small office in Europe, as well as single employees working from a home office all over the world. We always told customers that, no, we do not have a two-tier outsourcing strategy and these were not “offshore” developers. We simply searched for the best talent, with very specific software development experience, and that talent is not always to be found next door to our Silicon Valley office.

Fast forward 10 years after the EA acquisition. I recently found myself having the same offshore discussion with a customer. A specific principal-level engineer was located in a European country not known as a center for high tech. Why was it that we were proposing to invoice him at the same rate as the other US-based principal engineers? This customer had done their homework, and the evidence showed that average engineering rates in this European country are significantly lower.

I paused on the phone and thought back to our Embedded Alley days. What did I tell customers then? I realized that we never had a good, quantifiable way to explain to a customer why our engineering rates for developers outside the US, though lower than market rates in Silicon Valley, were significantly higher than the typical “outsourcing rates” in a particular location. We had talked about our hiring strategy and looking for the best talent wherever we might find it, but there were no metrics I could lean on.

Then I thought about the work this particular engineer had done at Embedded Alley; and then after the acquisition, he continued at Mentor Graphics (always working remotely from Europe). When he left Mentor, he contracted for Texas Instruments, then another US company, and finally, after the founding of Konsulko Group “rejoined” our team. Meanwhile he had had other opportunities, from contracting gigs to full time jobs in the US with H1B visa sponsorship.

And that’s when it struck me. This was not an engineer working in an offshore office at an offshore salary. “This is a globally employable engineer,” I said on the phone, “and we pay him a US level salary in order to retain him.” I continued to recount his work history and track record. The buyer understood and we moved forward with the deal.

What is a globally employable engineer? In my mind, it’s someone who could get a good job anywhere in the world due to the demand for their skills. It’s someone with a minimum of ten years of experience, highly talented, with excellent English language skills and some customer-facing experience. The ability to travel when necessary helps a lot, and that means the ability to get a B1/B2 US visa for occasional visits. Such engineers may choose to continue to live in their home country, or elsewhere in the world, but tapping their talent does not come at offshore salary cost.

It is time for the high tech industry to move beyond Outsourcing 1.0, and embrace the Globally Employable model to access the best engineering talent on earth, wherever on the planet they choose to reside.  

Konsulko Group to present at ALS Tokyo

Now in its eighth year, Automotive Linux Summit connects the Linux developer community with vendors and users to drive the future of embedded devices in the automotive arena.

On Wednesday, July 17, Scott Murray and Matt Ranostay of Konsulko Group will present Building an AGL Telematics Profile Demonstration Platform. This profile serves as a base for building headless telematics device images. Scott and Matt will discuss a practical use case, using the profile to build an AGL demonstration platform for a vehicle tracker or an insurance company’s driver data collection device.

Co-located with Open Source Summit Japan, ALS will be held this year at Toranomon Hills Forum in Tokyo. Registration information can be found here.


A good time to talk with us

Whether you’re in a small start-up, a huge global company, or anything in-between, there are key moments in Linux-based software development when it’s time to decide how much can be handled in-house, and what requires some outside assistance.

Here are four examples of a good time to talk with us:

  • Your engineers are experts at the top of your software stack, but kernel-level work needs to be done down near the bottom (where you don’t have much experience).
  • You’re dazzled by the power and complexity of the Yocto Project and OpenEmbedded build system, and your team needs to get up to speed quickly.
  • You’re building your next generation product on new hardware and encounter unforeseen “subtleties” in moving your code to the new platform.
  • You’ve crafted your software architecture from best-of-breed open source projects but you’re finding gaps that still need to be filled.

With 20-plus years of experience in embedded Linux architecture, development, build/CI, QA, maintenance and training, Konsulko Group can help you at every phase of your product cycle.

Any point in your development is a good time to contact Konsulko to discuss how we can work together.

Building a DIY SOHO router, Part 4

Building a DIY SOHO router using the Yocto Project build system OpenEmbedded, Part 4

In part three of this series I finished putting together what I wanted to have on my SOHO router and declared it to be done. While I plan to revisit the topic of a SOHO router using the Yocto Project and OpenEmbedded, this is the final part of the series. In this part, I want to focus on some of the things that I learned while doing the project.

The first thing is that I learned a lot about IPv6, specifically how it’s usually implemented within the United States for residential customers, and some of the implications of this implementation. To start with, I’ve been off-and-on trying to enable general IPv6 connectivity at home for some time now. Long before my ISP offered IPv6 service, I used Hurricane Electric for an IPv6 tunnel and connectivity. This was great and only sometimes led to problems, such as when Netflix finally supported IPv6 and began blocking well-known tunnels for region-blocking reasons. It wasn’t until I started on this project that I decided to try to make real use of routable addresses for hosting personal services. My expectations, and those of a lot of software designed to manage IPv6, are best described in an article from RIPE about understanding IP addressing. In short, my house or “subscriber site” should get at least 256 LAN segments to do with as I want. Docker expects to have its own LAN segment to manage as part of configuring network bridges. When you have 256 or more LAN segments to use, that’s not a problem at all.

Unfortunately, my ISP provides only a single LAN segment. This is simultaneously more IPv6 addresses than the whole of IPv4 and something that should not be further subdivided in routing terms. I could subdivide my LAN segment, but this would in turn cause me a whole lot more work and headaches, because at the routing level IPv6 is designed for my segment to be the smallest unit. Rather than deal with those headaches, I switched my plans from using Docker to using LXC. With LXC it’s easy to dump the container onto my LAN, where it picks up an IPv6 address the same way all of the other machines on my LAN do. This is good enough for my current needs, but it will make other things a lot harder down the line if I want separation at the routing level for some devices.
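
As an illustration, dumping a container onto the LAN is only a few lines of LXC configuration. This is a sketch, assuming LXC 3.x key names and the br0 bridge created in part two of this series:

# /etc/lxc/default.conf (sketch; bridge name from part two of the series)
lxc.net.0.type = veth
lxc.net.0.link = br0
lxc.net.0.flags = up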

But why am I doing that at all? Well, one of the benefits of having a small but still capable router is that I can run my own small services. While I don’t want to get into running my own email, I think it makes a whole lot of sense to host my own chat server, for example. With closed registration and no federation with other servers (or perhaps limited federation later on), I don’t need to worry about unauthorized users contacting my family, nor do I have to worry about some company deciding it’s time to shut down the service I use.

Another lesson learned is that while the Yocto Project has great QA, there’s always room to improve things. As part of picking a firewall I found that one of the netfilter logging options had been disabled by accident a while back. As a part of writing this series of articles and testing builds for qemux86-64, I found that one of the sound modules had been disabled. As a result, the instructions I wrote back in part 2 wouldn’t work. Working upstream is always fun and these changes have been merged and will be included in the next release.
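
As an aside, if you hit something similar before a fix lands upstream, a kernel config fragment in a bbappend is usually the quickest local workaround. A minimal sketch, assuming a linux-yocto based kernel; the fragment file name and the config symbol here are illustrative, not the actual upstream fixes:

# recipes-kernel/linux/linux-yocto_%.bbappend (sketch)
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI += "file://netfilter-log.cfg"

# recipes-kernel/linux/linux-yocto/netfilter-log.cfg (illustrative symbol)
CONFIG_NETFILTER_XT_TARGET_LOG=m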

I also worked on a few things for this project that I didn’t include directly in the relevant part of the series. For example, while I did include a number of full utilities in the list of packages installed in the router, I didn’t talk about replacing busybox entirely. This is something that OpenEmbedded supports using the PREFERRED_PROVIDER and VIRTUAL-RUNTIME override mechanisms in the metadata. Prior to this project, however, there wasn’t a good example upstream of how to do this. Furthermore, there wasn’t an easy way to replace all of busybox; instead, you had to list a single package and then include the rest of the required packages in your IMAGE_INSTALL or similar mechanism. I am a fan of using busybox in places where I’m concerned about disk usage. However, on my router I have plenty of disk space, so I want to be sure that if I have to go and solve a problem I’m not using my swiss army knife but rather have my full toolbox available. As a result, OpenEmbedded Core master now has packagegroup-core-base-utils and a documented example of how to use it in local.conf.sample.extended. This means that when I refresh this image to be based on the Warrior branch, I can remove a number of things from my IMAGE_INSTALL.
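
For anyone wanting to do the same, the new knob looks like this in conf/local.conf (a sketch following the example in local.conf.sample.extended):

# Use full versions of the base utilities rather than busybox
VIRTUAL-RUNTIME_base-utils = "packagegroup-core-base-utils"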

Another lesson is that old habits die hard. In general, I always try to use the workflow where I make a change outside the device I’m working on, build the change in, and test it, rather than editing things live. But when it’s “just” a quick one-line change, I’ll admit I sometimes do it live and roll it into my next build. And then sometimes I forget to roll all my changes back up. So while implementing this project I tried even harder than usual to not fall into that “just a quick change” mindset. For the most part I’ve been successful at sticking to the ideal workflow. I really believe stateless is the right path forward. And “for the most part” means that, yes, one time I did have to make use of the fact that the old rootfs was still mountable, and copied a file over to the new rootfs and then to the build machine. I like to think of that as a reminder that A/B updates are more helpful than a “rewrite your disk each time” workflow for those occasional mistakes.

The caveat to the lesson above is that, while I really did follow the “git, bitbake, mender” cycle on this project, I didn’t start on it quite as soon as I said in the article, and I spent a lot more time toying with stuff in core-image-minimal instead of following my own advice. I suppose that is the difference between writing a guide on how things should be done and how you do things when you just want to test one more thing before switching over. I really should have switched earlier, however, as every time I avoid doing the SD card shuffle it’s a win on a number of levels.

Did I say SD card above? Yes, I did. For this project, a 64GB “black box” in the form-factor of an SD card should have as long a life span as one in the form-factor of an M.2 SSD or any other common storage format. While my particular hardware has a SATA port, I don’t want to try to fit the required cabling, let alone the device itself, in the case that’s recommended. I will admit that I’m taking a bit of a risk here: I am putting as many frequently-written files under a ramfs as I can, and after all, I did say stateless is a goal. If everything really does die on me, I can be back up and running fairly quickly.
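
As a sketch of what that looks like (I use tmpfs mounts here; the mount points and sizes are examples, and the stock OpenEmbedded fstab already keeps /var/volatile in RAM):

# /etc/fstab additions (sketch; paths and sizes are examples)
tmpfs   /var/log   tmpfs   defaults,size=64m   0 0
tmpfs   /tmp       tmpfs   defaults,size=64m   0 0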

The last thing I learned is something I knew all along, really: I like the deeper ownership of the router. There’s both the pride and accomplishment of doing it myself, and that “old school” fun of being the admin again, for real.

For training, nothing beats hands-on

There are plenty of YouTube videos (and their open source equivalents) to help budding engineers master the intricacies of development, but often the best way to learn is to get in the same room as the experts, and go step-by-step through the process.

At SCaLE 17x in Pasadena earlier this month, Konsulko Group CTO Matt Porter taught a guided hands-on lab on leveraging IIO and Input kernel subsystems. In real time, Matt went line-by-line through the code, and the students were able to write a new driver and take the results with them on an embedded target board.

In this intimate and interactive setting, apprentice-level engineers could get personal attention if they were stuck or had any questions, no matter how basic.

Matt’s session was part of the E-ALE (Embedded Apprentice Linux Engineer) project. At major embedded Linux events, E-ALE provides several days of hands-on tutorials driven by volunteer professional speakers who present apprentice-level material in a way that beginners can understand and use.

We hope to see you during the next set of E-ALE tutorials at the Embedded Linux Conference in San Diego this August.

As always, Konsulko Group can also offer hands-on embedded Linux training at your location for your engineers. Please contact us to discuss your requirements for custom, on-site training.

Building a DIY SOHO router, Part 3

Building a DIY SOHO router using the Yocto Project build system OpenEmbedded, Part 3

In part two of this series I created a local configuration layer for OpenEmbedded, and had the build target core-image-minimal producing an image. The image that was produced wasn’t really a router, but did let us bring up our board and look around. In this article, I’m going to create a custom image and populate it with additional software packages configured to my requirements. I’m also going to get started using Over-The-Air (OTA) software updates on the device.

Now that I’ve proven that the image works on the hardware, I can really get down to implementing the project of making a router.  While I could continue to add things to core-image-minimal, it really makes sense at this point to stop and create my own image. Since I want something relatively small, I will still start with core-image-minimal as the base.  Moving back over to meta-local-soho, I’m creating the recipes-core/images directory and then populating core-image-minimal-router.bb with:

require recipes-core/images/core-image-minimal.bb

DESCRIPTION = "Small image for use as a router"

IMAGE_FEATURES += "ssh-server-openssh"
IMAGE_FEATURES += "empty-root-password allow-empty-password allow-root-login"

IMAGE_INSTALL += "\
    "
MENDER_STORAGE_TOTAL_SIZE_MB = "4096"

This tells bitbake that it must have core-image-minimal.bb available and to include it. I then provide a new DESCRIPTION to describe the new image. Next, I include a number of new features in the image. First, I’ll use the normal hook for adding an SSH server. Then I’ll add a line of features for development mode that I’ll remove later. These features, as their names imply, allow root to log in without a password. This is quite handy for development and quite unwise for production. I’ll circle back and remove these development features later. Next, I give myself an empty list of additional packages to be filled out later. Finally, I tell Mender that it has 4096 megabytes of disk space to work with. Since my storage is much larger than that, this hides the rest of the space from Mender so that I can control that part entirely myself (I’ll use some of it later to give LXC room to work with). At this point I can build core-image-minimal-router, and it will complete very quickly as I’ve not yet added any packages that have not been previously built. So it’s time to once again git add, git commit, and bitbake these changes.

At this point, I want to flash the new image onto the device and boot it up. The reason for this is that the new image can be used with Mender to test any subsequent image builds. The system is now functional enough to support delivering new image updates via Mender, so it’s good to get into the habit of using the OTA update workflow. It also forces me to treat the device as if it’s really stateless. I’ll talk about how to apply an OTA update when I make the next set of changes.

Now it’s time to begin adding content to the custom image. The first thing I’m going to do is borrow some logic from packagegroup-machine-base. I don’t want to use this packagegroup directly because it will cause bitbake to build a lot of extra stuff that I don’t end up installing. This is due to the fact that it’s part of packagegroup-base.bb (because it’s needed to resolve dependencies of other parts of the packagegroup). Instead, I’m going to add:

    ${MACHINE_EXTRA_RDEPENDS} \
    ${MACHINE_EXTRA_RRECOMMENDS} \

to IMAGE_INSTALL so that any additional machine-specific functionality that’s been specified is installed to the image. Next, I’ll add in kernel-modules to the list so that all of the modules that have been built for the kernel are installed to the image. This will be a lot easier than listing out every module I may need, especially later on when it comes to the various firewall rules I want to use. On top of all of this, I also want to drop in full versions of a bunch of common packages I use, and then let busybox fill in the rest:

    bind-utils \
    coreutils \
    findutils \
    iputils-ping \
    iputils-tracepath \
    iputils-traceroute6 \
    iproute2 \
    less \
    ncurses-terminfo \
    net-tools \
    procps \
    util-linux \

Almost everything in this list can be tweaked as desired. There are a couple items that serve a critical purpose and deserve an explanation:

  1. systemd calls out to $PAGER for many functions, including browsing logs with journalctl. If I don’t have the full version of less available, I won’t have a fully functional pager and browsing the output is extremely difficult.
  2. I don’t use xterm for my terminal emulator anymore so I want ncurses-terminfo installed. This ensures that the right terminfo is available and terminal output is correct.
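
Putting those pieces together, the IMAGE_INSTALL block in core-image-minimal-router.bb now reads:

IMAGE_INSTALL += "\
    ${MACHINE_EXTRA_RDEPENDS} \
    ${MACHINE_EXTRA_RRECOMMENDS} \
    kernel-modules \
    bind-utils \
    coreutils \
    findutils \
    iputils-ping \
    iputils-tracepath \
    iputils-traceroute6 \
    iproute2 \
    less \
    ncurses-terminfo \
    net-tools \
    procps \
    util-linux \
    "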

At this point it’s time for a git add, git commit, and then a bitbake of our image too.

Now that I have a new image with additional content to try out, I want to put it on the device and confirm things work. As mentioned before, I’m using Mender in standalone mode since I have a single deployed device.  It’s very simple to serve the new image and then apply it. On the build machine, I do the following (change qemux86-64 to match the machine in use):

$ (cd tmp-glibc/deploy/images/qemux86-64; python3 -m http.server)

And then on the device:

# mender -rootfs http://build-server.local:8000/core-image-minimal-router-qemux86-64.mender
... wait while it downloads and applies ...
# reboot

Once the device comes back up, I’ve logged back in, and confirmed I’m satisfied with my changes, I do:

# mender -commit

This will mark what I am now running as the valid rootfs. However, if the device didn’t boot up or I couldn’t log in, I would simply not commit the changes; I would just reboot or otherwise power-cycle the device. If I don’t commit the changes to Mender, I get an automatic rollback to the previous install. (Of course, any HTTP server on the build machine will do for serving the image, not just Python’s.)

At this point, it’s time to iterate over adding a number of different features that require little more than adding to IMAGE_INSTALL. Since I’ve talked about LXC, I need to add in lxc and gnupg (for verification of containers used from the download template). Once that’s added, I do the git add, git commit, bitbake, and then mender -rootfs cycle again and confirm LXC is working. One thing I noticed when doing this was that containers didn’t autostart, because the service isn’t enabled by default. Since I’m keeping this stateless, I changed that behavior with a bbappend file. I also ended up installing e2fsprogs-mke2fs to be able to further partition my device to give LXC some room to work with. This also means that I needed to have base-files provide an fstab that matches my setup, rather than the stock one. Another small thing to cover is whether or not your hardware has a hardware random number generator available. If you have one, you should pull in rng-tools on the image. If you don’t, you should install haveged to help feed the entropy pool instead.
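
For reference, the autostart tweak is a one-line bbappend. This is a sketch, and it assumes the lxc recipe in meta-virtualization packages a systemd unit whose enablement is controlled by SYSTEMD_AUTO_ENABLE; the recipe path may differ:

# recipes-containers/lxc/lxc_%.bbappend (sketch)
SYSTEMD_AUTO_ENABLE_${PN} = "enable"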

Now I need to enable a functional access point. This is the first case where it’s really non-trivial to write up the config file to use, so it’s done a little bit differently. The first step is to add hostapd and iw to the image and boot it. Now, on the device, edit /etc/hostapd.conf and iterate on editing and testing it on the device until everything is set up as desired. The iw tool can be helpful here to do things like perform a site scan to see what frequencies are already in use. Once I’m done with the config, I copy the file out from the target and over to my build server with scp as /tmp/hostapd.conf. Then it’s time to make it stateless:

$ mkdir -p recipes-connectivity/hostapd/hostapd
$ cp /tmp/hostapd.conf recipes-connectivity/hostapd/hostapd/

And then I edit recipes-connectivity/hostapd/hostapd_%.bbappend to look like this:

FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"

SRC_URI += "file://hostapd.conf"

do_install_append() {
    install -m 0644 ${WORKDIR}/hostapd.conf ${D}${sysconfdir}
}

SYSTEMD_AUTO_ENABLE_${PN} = "enable"

This does two things. Everything except the last line tells bitbake to look in my layer for hostapd.conf and then install it. The last line says that, now that we have a configured AP, we want it to start automatically, so the systemd service is enabled by default. Now it’s time once again for the git add, git commit, and so forth cycle.
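
For reference, a minimal WPA2 access point configuration looks something like this; every value below is a placeholder rather than my actual config:

# /etc/hostapd.conf (sketch; all values are placeholders)
interface=wlan0
bridge=br0
ssid=example-ssid
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=change-me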

The next step is to do the same kind of thing for dnsmasq. The good news is that this time, the dnsmasq_%.bbappend file only needs one line:

FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"

This is because the rest of the recipe already knows to grab dnsmasq.conf from a local file. In the case of my network, I need to pass in a few special options to some DHCP clients and have certain clients be given certain IP addresses, so I’ve gone with dnsmasq as my light-weight, but still fully featured IPv4 configuration server. I could have just as easily gone with ISC DHCPD instead, and it would look much the same as the above.  Conversely, if I didn’t need those few extra rules, I could just let systemd handle DHCP serving.  I left out IPv6 from my statement there as I am letting systemd handle that.
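
To illustrate the kinds of rules I mean, here are a few hypothetical dnsmasq.conf entries; the interface name matches the bridge from part two, and the addresses and MAC are made up:

# serve DHCP on the LAN bridge only
interface=br0
dhcp-range=192.168.0.100,192.168.0.200,12h
# pin a known client to a fixed address
dhcp-host=00:11:22:33:44:55,192.168.0.10
# hand out a special option, here an NTP server
dhcp-option=option:ntp-server,192.168.0.1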

The only thing missing at this point from a router, aside from turning off developer-mode features, is a firewall. There are a few ways to go about this. I already have systemd handling one of the aspects often associated with a firewall: setting up IPv4 NAT. If the only other thing I needed on top of this were to shut the rest of the world out, I could use ufw, and potentially even leverage its features that allow adding iptables commands directly for slight enhancements. While I have gone that direction for some projects, it’s not a good fit for this one. Instead, I chose to go with arno-iptables-firewall because I’m going to have a more complex setup. The process of customizing the firewall configuration is similar to how I customized hostapd and dnsmasq. That is, I iteratively configure it on the device, test for functionality, and copy the configuration files to my host. This time, however, the arno-iptables-firewall_%.bbappend will look a little different:

FILESEXTRAPATHS_append := ":${THISDIR}/files"

SRC_URI += "file://firewall.conf \
            file://custom-rules \
"

do_install_append() {
    install -m 0644 ${WORKDIR}/firewall.conf \
    ${D}${sysconfdir}/arno-iptables-firewall/
    install -m 0644 ${WORKDIR}/custom-rules \
    ${D}${sysconfdir}/arno-iptables-firewall/
}

I have two files this time. The first one is the main config file, and the second one contains my custom rules. The second file is only necessary because I have a number of custom rules; otherwise it could be omitted.
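
As a hypothetical example of the sort of thing that goes in custom-rules (broadly speaking, it holds plain iptables invocations that the firewall applies alongside its own rules), here is a sketch forwarding one service port to a container; the addresses and port are made up:

# Forward inbound TCP 8448 to a service running in an LXC container
iptables -t nat -A PREROUTING -p tcp --dport 8448 \
    -j DNAT --to-destination 192.168.0.50:8448
iptables -A FORWARD -p tcp -d 192.168.0.50 --dport 8448 -j ACCEPT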

At this point, looking back at the feature list I laid out in part one, I believe I can check all of my items off now.  I have the following all operational:

  1. access point
  2. firewall
  3. IPv4 and IPv6 network configuration
  4. containers 
  5. OTA software update

I’m building all of my software as hardened as my compiler will allow.  There’s very little state on the router itself to worry about backing up, and everything else is handled by my build server being backed up.  I’m confident in my OTA configuration as I’ve been using it for some time now in the development workflow. I’ve also tweaked the installed package list so that all of my favorite sysadmin tools are available.

At this point, it’s time to lock things down. First up, it’s time to go back to core-image-minimal-router.bb and remove that second line of IMAGE_FEATURES. Instead, I’m going to create a new local-user.bb recipe with my own user and SSH key. After listing local-user in IMAGE_INSTALL, I copy meta-skeleton/recipes-skeleton/useradd/useradd-example.bb to somewhere in meta-local-soho and change it to look like this:

SUMMARY = "SOHO router user"
DESCRIPTION = "Add our own user to the image"
SECTION = "examples"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COREBASE}/meta/COPYING.MIT;md5=3da9cfbcb788c80a0384361b4de20420"

SRC_URI = "file://authorized_keys"

S = "${WORKDIR}"

inherit useradd

# You must set USERADD_PACKAGES when you inherit useradd. This
# lists which output packages will include the user/group
# creation code.
USERADD_PACKAGES = "${PN}"

USERADD_PARAM_${PN} = "-u 1200 -d /data/trini -r -s /bin/bash trini"

do_install () {
    install -d -m 0755 ${D}/data/trini
    install -d -m 0700 ${D}/data/trini/.ssh

    install -m 0600 ${WORKDIR}/authorized_keys ${D}/data/trini/.ssh/

    # The new users and groups are created before the do_install
    # step, so you are now free to make use of them:
    chown -R trini ${D}/data/trini
    chgrp -R trini ${D}/data/trini
}

FILES_${PN} = "/data/trini"

# Prevents do_package failures with:
# debugsources.list: No such file or directory:
INHIBIT_PACKAGE_DEBUG_SPLIT = "1"

Now, there’s one slight problem. I added my user with a home directory under /data, which is excluded from Mender updates. The good news is that I get persistent history and so forth. The bad news is that the home directory isn’t populated there yet. So I either need to re-flash one last time, or manually copy the files over from the filesystem image to the device before I reboot. Finally, I need to enable my user to use sudo. In addition to adding sudo to IMAGE_INSTALL, I also need to either tweak the sudo recipe so that /etc/sudoers.d/ is searched, tweak it so that anyone in the wheel group can use sudo and add a wheel group, or borrow the example from meta/recipes-core/images/build-appliance-image_15.0.0.bb and do the following in core-image-minimal-router.bb:

# Take the example from recipes-core/images/build-appliance-image_15.0.0.bb
# on adding more sudoers
fakeroot do_populate_poky_src () {
    echo "trini ALL=(ALL) NOPASSWD: ALL" >> ${IMAGE_ROOTFS}/etc/sudoers
}
IMAGE_PREPROCESS_COMMAND += "do_populate_poky_src; "

With all of that built, deployed, and unit tested, it’s time to go live. My SOHO router is done and ready for production. It’s now on me to make sure this stays up to date, which in some ways is a lot better than the alternative. With my previous router, I only had a non-volatile RAM dump, specific to the model of router, as a backup. I now have my complete configuration saved, containing firewall rules, DHCP options, and more. Since starting on the project I have even braved a few OTA updates and had minimal downtime.

This concludes the walk through of building a SOHO router with OpenEmbedded. In the final part of this series, I will describe some of the lessons I learned while designing and implementing this project.

[Go to Part Four of the series.]

Building a DIY SOHO router, Part 2

Building a DIY SOHO router using the Yocto Project build system OpenEmbedded, Part 2

In part one of this series I explained some of my motivations for this project. Now it’s time to start on implementing the project itself. At this point I’m going to assume the reader has basic familiarity with using OpenEmbedded. Otherwise, the Yocto Project (YP) quickstart guide and OpenEmbedded getting started pages are useful for bringing yourself up to speed. I assume that you’ve followed those instructions on how to prepare your build host and have gone so far as to complete a build for some target prior to this. In this guide, I’m going to do my best to follow common best practices. Whenever I deviate from best practices, I’ll explain why we want something a bit different. After all, best practices are supposed to be taken as guidelines, not absolutes. Finally, I’m going to be working against the Yocto Project thud code-name release, as that’s what is current as of this writing.

The first thing I’m going to do in my project is create a build directory now so that I can easily get access to various tools.  The next thing I’m going to do is use those tools to make a layer to store our configuration in.

$ . oe-core/oe-init-build-env
You had no conf/local.conf file. This configuration file has therefore been
created for you with some default values. You may wish to edit it to, for
example, select a different MACHINE (target hardware). See conf/local.conf
for more information as common configuration options are commented.
...
You can also run generated qemu images with a command like 'runqemu qemux86'
$ bitbake-layers create-layer ../meta-local-soho
NOTE: Starting bitbake server...
Add your new layer with 'bitbake-layers add-layer ../meta-local-soho'
$ bitbake-layers add-layer ../meta-local-soho
NOTE: Starting bitbake server...
$

Be sure to change oe-core to wherever the core layer was checked out.  Now that the layer has been created, let’s go ahead and start by putting it into git. Why? I am a firm believer in “commit early and commit often” as well as “cleanup and rebase once you’re done”. To me, one of the big selling points of git is that you can track your work incrementally, and when you notice unexpected breakage later on you can easily go back in time and locate the bugs.

$ cd ../meta-local-soho
$ git init .
$ git add *
$ git commit -s -m "Initial layer creation"
$ git branch -m thud

With the first commit in place, it’s time to start customizing the layer. We don’t need the example recipe, so let’s remove it.

$ git rm -r recipes-example/example
$ git commit -s -m "Remove example recipe"

Next, it’s time for a more substantive set of customizations. I’ll start by editing the README. Why? To start with, I’m going to list all of the layers I know of that are dependencies at this point. The README will also be a handy place to note which physical port is for WAN, which port(s) are for LAN, and the network devices that each port is associated with. For now, I add the following to the README:

  URI: git://git.openembedded.org/meta-openembedded
  branch: thud
  layers: meta-oe, meta-python, meta-networking, meta-filesystems

  URI: git://git.yoctoproject.org/meta-virtualization
  branch: thud

  URI: https://github.com/mendersoftware/meta-mender
  branch: thud

Now that I’ve documented these requirements, I’ll also enforce them in code. I edit conf/layer.conf and document these requirements too. The end of the file should look like:

LAYERDEPENDS_meta-local-soho = "core"
LAYERDEPENDS_meta-local-soho += "openembedded-layer meta-python"
LAYERDEPENDS_meta-local-soho += "networking-layer filesystems-layer"
LAYERDEPENDS_meta-local-soho += "virtualization-layer mender"
LAYERSERIES_COMPAT_meta-local-soho = "thud"

Since there are so many layers in use, I will make use of the TEMPLATECONF functionality so that the build directory will be populated correctly to start with. Next, I copy meta/conf/bblayers.conf.sample, meta/conf/local.conf.sample and meta/conf/conf-notes.txt from the core layer over to the conf directory and commit them without change. Why? This will isolate my local edits later on and make my life easier next year when I decide it’s time to update to a current release. After I’ve committed those files, I then edit conf/bblayers.conf.sample and change it to look like:

BBLAYERS ?= " \
  ##OEROOT##/meta \
  ##OEROOT##/../meta-openembedded/meta-oe \
  ##OEROOT##/../meta-openembedded/meta-python \
  ##OEROOT##/../meta-openembedded/meta-networking \
  ##OEROOT##/../meta-openembedded/meta-filesystems \
  ##OEROOT##/../meta-virtualization \
  ##OEROOT##/../meta-mender/meta-mender-core \
  ##OEROOT##/../meta-local-soho \
  "

Note that this assumes a directory structure where I’ve put all of the layers I will use in the same base directory. If I didn’t do that then I would need to adjust all of the paths to match how to get from the core layer to where the layers are stored. Now I want to use git add and commit all of these changes, so I can move on to the next step.

I will enable systemd next, and while there are a number of places I could do this, including going so far as to create my own “distro” policy file, for this article I’ll just use conf/local.conf.sample to store these changes. While the core layer’s conf/local.conf.sample.extended has an example of switching to systemd, I’ll do it slightly differently. At the end of conf/local.conf.sample insert the following, then git add and commit:

# Switch to systemd
DISTRO_FEATURES_append = " systemd"
VIRTUAL-RUNTIME_init_manager = "systemd"
VIRTUAL-RUNTIME_initscripts = ""
VIRTUAL-RUNTIME_syslog = ""
VIRTUAL-RUNTIME_login_manager = "shadow-base"
DISTRO_FEATURES_BACKFILL_CONSIDERED = "sysvinit"

This differs from the core example in a few ways. First, I blank out pulling in an initscripts compat package. While this can be useful, for these images it’s going to end up being redundant. Next, I blank out a syslog provider as I’ll be letting systemd handle all of the logging. Finally, while busybox can be available and provide the login manager, I’ll be using shadow-base instead. This particular change is something now done upstream and will be in later releases, so keep that in mind for the future.

The next functional chunk for conf/local.conf.sample is enabling support for virtualization through meta-virtualization. That layer has its own well documented README file, and taking the time to go over it is a great idea. In my configuration, I’m just going to enable a few things that give me the minimal functionality. So add, git add and git commit:

# Add virtualization support
DISTRO_FEATURES_append = " virtualization aufs kvm"

When talking about system security there are many aspects. One aspect is to harden the system as much as possible by having the compiler apply various build-time safety measures that turn certain classes of attack from “exploit the system” to “Denial of Service by crashing the application”, or even “this is wildly unsafe code, fail to build”. To enable those checks in the build, add, git add and git commit:

# Security flags
require conf/distro/include/security_flags.inc

There’s one last bit of functionality I want in the conf/local.conf.sample file, and that’s support for Mender. How do I go about that? That’s going to depend a bit on what hardware you’re going to do this project on, as it’s a little bit different for something like the APU2 than on an ARM platform. Fortunately, there’s great documentation on how to integrate Mender here. While reading over all of the information there is important, it’s best to focus on the Configuring the build section to understand all of the required variables.

In fact, now is a good time to talk more about how I’m going to use Mender in this particular setup. Looking at the overall Mender documentation, it’s very flexible. For this very small deployment scenario, I’ll use standalone mode rather than managed mode. This lets me skip all of the things about setting up a host of other services to make upgrades happen automatically. So, we’ll follow the instructions for configuring Mender in standalone mode.

The next thing to touch on is how to handle persistent data. One way to do this would be to make sure that anything which needs to be manually customized or that will be persistently changed at runtime is written somewhere under /data on the device rather than at its normal location. This is because the normal location is going to change when I apply a new update but /data will always be the same. For a lot of deployments, this idea works best, as there are likely many users, and beyond our initial configuration the end user will make changes I know nothing about. For my use case, I am the user, and making the system as stateless as possible will in turn make backing the system up as easy as possible. So instead I will be modifying recipes to include our local configuration when needed. Having an easy to access and modify backup of the various configuration files and so forth will make my life easier in the long run if, for example, something happens to the hardware and it needs to be replaced.
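
To give a flavor, the Mender pieces that end up in conf/local.conf.sample look something like the following minimal sketch; the exact variables and values depend on the hardware, so the Mender documentation remains the authority here:

# Mender integration (sketch; consult the Mender docs for your board)
INHERIT += "mender-full"
MENDER_ARTIFACT_NAME = "release-1"
ARTIFACTIMG_FSTYPE = "ext4"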

Now that I have everything configured, it’s time to create the build directory. This will let me take advantage of all of the configuration work I just did. At this point, I can go back to the shell, leave meta-local-soho and do this:

$ TEMPLATECONF=../meta-local-soho/conf . oe-core/oe-init-build-env build-router

Again, be sure to change oe-core to wherever the core layer was checked out. Running cat conf/bblayers.conf will show how the changes that were made were now expanded.

What to do next? Well, that depends a little on what I already know about the system. The next step in this project is to configure systemd to take care of creating the networks. If I already knew what all of the devices would be named, I could move straight on to that step. But since I don’t know, or am not sure, the next step is to build core-image-minimal. That’s as easy as doing:

$ bitbake core-image-minimal

and waiting for the final result, assuming the local.conf.sample file was configured to default to the appropriate target MACHINE. Otherwise, I’ll need to pass MACHINE in on the command line above. Once that completes, take the appropriate image and boot it. The reward should be a root login prompt. In this configuration, there’s no root password right now. Log in and do:

# ip link

and this will show me what the interface names are. Assuming there is a DHCP server somewhere already, I can then use udhcpc -i NAME after plugging in an Ethernet cable to see which interface is which and note it down for the next step.

Now that I have my network interface names, I can configure systemd to handle them. In my specific case I have 4 Ethernet ports, and coincidentally the one closest to the physical console port (enp1s0) is the one I wanted to call the WAN port. This lets me put the other 3 ports into a bridge for my LAN. Now, to configure these interfaces, I’m going to dump some files under /etc/systemd/network/, and I will use the base-files recipe to own these files, rather than systemd itself. Why? Any change to the systemd recipe forces a rebuild of systemd, which in turn forces a rebuild of a lot of other packages, so using base-files isolates that kind of build churn. Now I go to meta-local-soho and enter the following:

$ mkdir -p recipes-core/base-files/base-files
$ cd recipes-core/base-files/base-files

I’m going to create four different files. Note that all of the filenames are arbitrary and are meant to help poor humans figure out what file does what. If another naming scheme is helpful, it can be used just as easily. First I’ll take care of the WAN port by creating wan-ethernet.network with the following content:

# Take the eth port closest to the console port for WAN
[Match]
Name=enp1s0

[Network]
DHCP=yes

Next, I’ll create our bridge for the other ports and this is done with two files. First I need a lan-bridge.network with:

# Bridge the other 3 remaining ports into one.
[Match]
Name=enp2s0 enp3s0 enp4s0

[Network]
Bridge=br0

Second I create br0.netdev with:

[NetDev]
Name=br0
Kind=bridge

And now I have a bridge device that I can use to manage all of our LAN Ethernet ports. Later I’ll even put the AP on this bridge for simplicity. Now that I have a bridge, I need to configure it, so I create bridge-ethernet.network and add:

[Match]
Name=br0

[Network]
Address=192.168.0.1
IPForward=yes
IPMasquerade=yes
IPv6AcceptRA=false
IPv6PrefixDelegation=dhcpv6

[IPv6PrefixDelegation]
RouterLifetimeSec=1800

There’s a lot in there, but it can be broken down pretty easily. Working my way up from the bottom: the prefix-delegation settings take what the WAN port learns over IPv6 and pass along what devices on the LAN need to configure themselves. Since we still live in a world with IPv4, I need to enable masquerading and forwarding. Finally, I configure a static IP for the interface that matches br0. I haven’t talked about configuring the LAN for IPv4 yet. That is going to be a bit more complex: while systemd supports a trivial DHCP server, I want to do more complex things like assigning specific addresses, so that will come later.

Finally, it’s time to make use of these new files. I’ll head up a level and back to recipes-core/base-files and create base-files_%.bbappend with the following:

FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"

SRC_URI += "file://wan-ethernet.network \
            file://lan-bridge.network \
            file://bridge-ethernet.network \
            file://br0.netdev \
" do_install_append() { # Add custom systemd conf network files install -d ${D}${sysconfdir}/systemd/network # Add custom systemd conf network files install -m 0644 ${WORKDIR}/*.network ${D}${sysconfdir}/systemd/network/ install -m 0644 ${WORKDIR}/*.netdev ${D}${sysconfdir}/systemd/network/ }

What this does is to look in that directory where I created those 4 files, add them to the recipe and then finally install them on the target where systemd will want them. At this point, I can git add the whole of recipes-core/base-files, and git commit.

There’s one last thing I want to configure right now. I’m going to have systemd handle things like turning on NAT, and doing some other basic iptables work. As a result, I need to enable that part of systemd. Another thing I’ll want to do here is work around a current systemd issue with respect to bridges. To do so, I need to make the directory recipes-core/systemd/ and create the file systemd_%.bbappend in that directory with the following:

do_install_append() {
    # There are problems with bridges and this service, see
    # https://github.com/systemd/systemd/issues/2154
    rm -f ${D}${sysconfdir}/systemd/system/network-online.target.wants/systemd-networkd-wait-online.service
}

PACKAGECONFIG_append = " iptc"

The first part of these changes is to work around the github issue mentioned in the comment. While it’s quite frustrating that the issue in question has been open since the end of 2015, I can easily work around it by just deleting the service for my use case. The second part is to add iptc to the PACKAGECONFIG options, and in turn that part of systemd will be enabled and it can support iptables so the IPMasquerade flag above will work.

At this point, I can now build core-image-minimal again and have a system that will automatically bring up the network. However, it is not a router yet. In the next part of the series I will create a new image that is intended to be a router and customize that.

[Go to Part Three of the series.]