Tableau Public

Tableau Public is free data visualization software that allows anyone to easily create interactive charts and maps. To use Tableau Public, connect to a local file such as a CSV or Excel file. The tool then lets you drag and drop data columns and choose from a variety of chart types to turn your data into easy-to-digest visualizations, called “vizzes.”


Chart Types

Heat Map, Highlight Table, Symbol Map, Filled Map, Pie Chart, Horizontal Bar Chart, Stacked Bar Chart, Side-by-Side Bar Chart, Treemap, Circle View, Side-by-Side Circle View, Line Charts (Continuous & Discrete), Dual-Line Chart (Non-Synchronized), Area Charts (Continuous & Discrete), Scatter Plot, Histogram, Box-and-Whisker Plot, Gantt Chart, Bullet Graph, Packed Bubbles

In addition to individual vizzes, users can display multiple vizzes in a single dashboard.


Publishing

To publish your vizzes, upload them to Tableau Public. The vizzes can then be included on websites using embed code or the JavaScript API.

Project Management for Small Projects

When we work with clients on smaller projects, one of the things we hear a lot is “This is a small, quick project, we don’t need project management.” But they still want the project to run smoothly and as expected. So how do you achieve this without a full project management plan? In my experience, you can pare back a lot of what may be standard operating procedure on larger projects, but you need to keep what I refer to as my top 5.

Documented Requirements

You can’t move forward without knowing what it is you have to do! Requirements need to be documented so that everyone involved with the project can reference them and understand the expectations for delivery. For some small projects, I have seen requirements and estimates discussed in a phone call, and agreement to move forward with the project based on that phone call alone. Verbal delivery of the requirements should always be followed up with a quick email, or through shared online tools, to ensure that everyone is in agreement. It is far too easy for a customer to come back and dispute what was delivered if it has not been documented.

Documented Assumptions

Those requirements we just talked about? I’m pretty sure they take some assumptions into account, and those also need to be noted so that everyone is on the same page. For example, maybe the overall timeline for the project is a week, but the client doesn’t get you access to their system until day 3. Do they still expect you to deliver at the end of the week while you think you have a week from that point? The assumption that system access will be available at the start of the project is pretty key to that delivery timeline, and needs to be stated clearly. What other assumptions is the client making when they claim this is a quick, easy project?

Documented Risks

Even if a full risk register is not used for a project, risks should still be discussed and noted along with the assumptions. And don’t think that just because something is a small project, there aren’t big risks associated with it. Right now, it’s summertime here, and that means vacations. Does the client have a key team member scheduled for vacation soon? What happens if the project is not done before they leave? Does it get put on hold, or does a coverage person step in who needs to be brought up to speed? What about a project that is only adding a few fields to an enrollment application? If those fields are needed to quickly meet an enrollment deadline, any delays could have a huge impact on the client. Identifying those risks helps to keep the focus where it is needed on these short-term projects. Which brings us to…

The Triple Constraint

Every project, no matter how big or small, has constraints. The Triple Constraint represents the standard constraints of Cost, Scope, and Time. A change to one of these will impact the others, so it is important to know which is most important to your client. As the project deviates from expectations, decisions on how to proceed should be guided by the constraint that matters most to the client. If a specific deadline must be met, then Time is your top constraint; some decisions may need to cut scope in order to meet that deadline, or to increase cost and bring in additional team members. Of course, the client or sponsor needs to be the one making the final decision, but you want to present the best options for them based on their key constraint.

Consistent Communication

Last, but definitely not least, is consistent communication. Decide with your customer what regular communications are needed, and stick with them. Don’t delay bringing up any issues the team may be having; keep the customer informed. Even if an issue has not impacted delivery yet, letting the customer know as early as possible eliminates surprises later on. Transparency in communication has always been, and will always be, my top priority when managing a project. Getting information out to all necessary team members makes for better decision making and a more successful project.

So, whether you have a dedicated Project Manager on your team or your team lead is taking on these responsibilities, project success is the top priority. Are your top 5 tools to ensure success similar to mine? Let us know in the comments.

Drupal 8 – The New Stuff

After attending the recent Drupaldelphia camp in April, I wanted to revisit the recent changes that came about with the Drupal 8 release.

Many new features and changes

  • Core has many additions, including Views, a breakpoint module, multilingual content, CKEditor, and inline editing of blocks.
  • Drupal 8 now uses many components of the Symfony 2.7 framework: HttpFoundation, HttpKernel, Routing, EventDispatcher, DependencyInjection, and ClassLoader. It also uses a new templating framework called Twig.
  • The new version has improved responsive design: images can now be resized based on breakpoints, and the admin interface has been redesigned to work on mobile devices.
  • Configuration data is now stored in files, which makes managing configuration with git much easier, and that data can now be imported and exported.
  • REST and serialization web services are now supported in core.
  • All core modules can now be found in the core folder, and custom or contributed modules can be placed directly in the modules folder instead of sites/all/modules.
  • New Fields are now available for Date, Email, Link, Reference, and Telephone.
  • Fields can now be added to Nodes, Blocks, Comments, Contact Forms, Taxonomy terms, and Users.
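As a small illustration of the Twig syntax mentioned above (a generic example, not taken from Drupal core): print a variable through a filter and loop over a list.

```twig
{# Generic Twig sketch: "title" and "items" are made-up variables. #}
<h2>{{ title|upper }}</h2>
<ul>
  {% for item in items %}
    <li>{{ item }}</li>
  {% endfor %}
</ul>
```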

Drupal 8 also includes updates to Module development.

  • Modules are now a mash-up of old Drupal conventions and Symfony conventions. Porting modules from D6/D7 to D8 is possible but will take a decent bit of effort.
  • .info files are now .info.yml files.
  • Module development now follows an MVC pattern and is more object-oriented. The General Controllers are placed in the lib/Drupal/[module]/Controller folder following Symfony guidelines so that autoloaders will work. Form Controllers are placed in the lib/Drupal/[module]/form folder.
  • Routing is now done by a routing.yml file.
  • Many hooks still remain, although some, such as hook_menu, have been replaced by the new routing system.
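To make the new YAML formats concrete, here is a sketch of what the files might contain. The module name "example" and its route are made up for illustration; the keys follow the documented D8 format:

```yaml
# example.info.yml
name: Example
type: module
description: 'An example module.'
core: 8.x

# example.routing.yml
example.content:
  path: '/example'
  defaults:
    _controller: '\Drupal\example\Controller\ExampleController::content'
    _title: 'Example'
  requirements:
    _permission: 'access content'
```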

Additional information on modules can be found here:


Upgrading to Drupal 8

The suggested method for upgrading a D6 or D7 site to Drupal 8 is to set up a fresh Drupal 8 site and then import the configuration and content, with some caveats. The supported migrations have focused on D6 since it is now end-of-life. For Drupal 6, migrations are supported for Core and for the CCK, Link, Email, Phone, and ImageCache modules. Drupal 7 currently only supports migration of content, users, taxonomy, blocks, menus, and filter formats.

Upgrade Requirements:

  • Fresh install of Drupal 8 with the Migrate Drupal module enabled
  • Access to the Drupal 6 or 7 database from the host where your new Drupal 8 site is located.
  • Access to the source site’s files.
  • The Migrate Upgrade module installed and enabled on the Drupal 8 site.
  • Drush 8 (recommended)
  • If migrating private files from Drupal 7, configure the Drupal 8 file_private_path path in settings.php before running the upgrade.

Upgrade Outline

  1. Download and install the required modules. Remember that not all modules have been ported to D8, and some have been split – for example, Block has been split into Block and Custom Block.
  2. Run the upgrade
  • The UI-based route via the /upgrade URL is good for lightweight sites. This is useful for small site migrations: you fill in the fields and let the magic happen. You’re rewarded with a pretty report at the end.
  • Drush (preferred) is the only practical option for large datasets.
  • drush migrate-upgrade generates the migrations. You can use --configure-only for more control over individual migrations.
  • drush migrate-import runs migrations for individual items, after migrate-upgrade has been used with --configure-only.
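Put together, the Drush path might look something like the following sketch. These commands need a configured Drupal 8 site and access to the legacy database, so they are shown for illustration only; the database URL and site path are placeholders:

```shell
# Generate migration configuration from a legacy D6 site (placeholder values):
drush migrate-upgrade --legacy-db-url=mysql://user:pass@localhost/d6db \
  --legacy-root=/var/www/d6-site --configure-only

# Then run the generated migrations:
drush migrate-import --all
```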

There are some gotchas to look out for when upgrading:

  • Views do not migrate and must be recreated.
  • PHP filtering has been dropped.
  • Many contrib modules have not been ported to D8.
  • A good bit of stuff is still missing from the D7 upgrade system.
  • D8 requires PHP 5.5.9, so older systems may require a PHP update.
  • A full list of known issues (many related to D6) can be found here:

If you’ve found other tricks or tips or gotchas with your Drupal 8 upgrade, let us know in the comments.

Introducing Loopless

One of the advantages of devops and having a mixed team of systems people and developers is that it frees you from being limited to tools written by other people. While using third-party tools is almost always preferable, you can also fall back on having your developers whip something up when needed.

On the hosting side of our business, one of the big projects we have going on right now is creating tools to make our server deployments more automated and reliable. As part of that, we’ve recently found it helpful to have our developers generate disk images (i.e., fake hard drives stored in files) for Linux so that they can test changes in a virtual machine locally or even deploy them. Almost any developer or systems administrator these days works with disk images, whether they are cloud hosting images, local VM images, or installer disk images. You would imagine that, given how prevalent they are, there would be easy tooling for creating them.

As it turns out, there are some problems. The steps involved for creating a disk image are:

  • Create a file of the correct size
  • Partition it
  • Create a filesystem on the partitions you need
  • Write files to those partitions
  • Make it bootable

Of those steps, only the first can be done directly to an image file using the standard tools. Every other step requires the use of what on Unix are called loopback devices: essentially asking the OS to present your disk image as if it were a physical drive on the system, and then using the normal disk utilities. The problem with this is twofold: even just setting up the loopback requires a high level of system access, and even after it’s set up, the access level needed to work with it is the same as that needed to access the actual hard drives on the computer.
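As a sketch, the traditional loopback workflow for the steps above looks roughly like this. The sizes, device names, and paths are illustrative, and every step after the first needs root, which is exactly the problem:

```shell
# Step 1 works on the plain image file with no special access:
truncate -s 64M disk.img                         # create a file of the correct size

# Steps 2-5 need a loopback device and root privileges (illustrative only):
# sudo losetup --find --show --partscan disk.img # expose the image as /dev/loopN
# sudo sgdisk --new=1:0:0 /dev/loop0             # partition it
# sudo mkfs.ext4 /dev/loop0p1                    # create a filesystem
# sudo mount /dev/loop0p1 /mnt                   # write files to the partition
# sudo cp -a rootfs/. /mnt/
# sudo extlinux --install /mnt                   # make it bootable (syslinux)
# sudo umount /mnt && sudo losetup -d /dev/loop0 # detach when done
```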

For some use cases this is fine, but for us, asking every developer to give that level of access, where one mistake could wipe out all of their personal data, to our automated tools was asking too much. In addition, neither of the two major Linux boot loaders these days, grub and syslinux, provides Mac OS X tools for making disks bootable, and a good chunk of our team are Mac users. The only other solution seemed to be to have every developer run a VM just for doing image builds, but this turned out to be a big productivity killer as various components and files needed to be made accessible to the VM.

Ultimately we decided to look around and see if there were libraries that could be stitched together to produce a tool that could do everything we needed. Even there we found somewhat mixed news. We discovered a great library for working with ext4 called lwext4, but for everything else we ended up resorting to simply compiling in source files from a couple of command-line utilities like gptfdisk and syslinux.

The result is loopless, our new utility for working with disk images looplessly. In addition to handling each of the steps above, it’s also available for Mac, making it the first tool we’re aware of to allow creating bootable Linux disks on Mac OS X.

We’ve been using it as part of our workflow for a couple of months now, and it’s made the process of testing system configuration changes a breeze. In the future we plan on new versions with better error reporting (currently a fairly large blind spot), support for sparse vmdk images, and better support for physical disks.

We’ve published it on our public Gitlab in hopes it can save some other team some time.

ELK in a New Habitat: The Jungle

Here at xforty technologies we’ve been using a log aggregation toolkit called ELK. This toolkit is a collection of open source software packages that collect logs and parse them into structured snippets (logstash), store them in a searchable index (elasticsearch), and allow searching and dashboard reporting (kibana). Making the setup work well takes some effort, but the results are incredible. We stood up an ELK system and have been running it for about a year.

What can we do with it?


Through my twenty or so years doing software development and systems support, I’ve made use of the Linux command-line tools to parse logs and produce digestible information. Ad-hoc use of grep, sed, awk, and wc was the go-to. Maintaining a sharp command of these tools requires regular use: go six months without solving a log-parsing problem and you’ll be spending some time rooting through cheat sheets and manual pages.

ELK, using the kibana front end, can easily search through log data. Using an easy-to-digest query format, you can quickly find what you’re looking for. We used this very basic kibana feature for probably six months without diving into any of the more advanced features. Our ELK setup provides very quick searching of our data; searches usually return in under a second. We’ve been able to support users, troubleshoot problems, identify misbehaving hosted applications, and spot nefarious scans, all with the simple searching interface. Kibana can do far more than simple searching, however.
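For example, here are a couple of hypothetical searches in the Lucene-style syntax kibana accepts. The field names depend entirely on how your logstash filters tag your events, so treat these as illustrative only:

```
type:firewall AND action:drop AND src_ip:192.0.2.10
response:[500 TO 599] AND host:"web01"
```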

Simple Visualization

After squashing a performance issue, I decided to tinker with the visualization feature to try to answer some questions I have had, like:

What are the IP addresses most dropped from our firewalls?

Over the past seven days, the top five blocked IP addresses each clocked in at over 11,000 blocked connection attempts, with the leader at over 27,000.

[Screenshot: top five blocked IP addresses over the past seven days]

What are the most commonly scanned tcp/udp ports?

We figured the best indicator of this was to sum up the number of drops per tcp/udp port. Okay, this one is pretty interesting: the most commonly scanned port seems to be telnet, port 23. Port 5060, Session Initiation Protocol, is up there on the list as well.

[Screenshot: dropped connections summed per tcp/udp port]

We were all pretty surprised to see that port 23, telnet, is the clear leader. The only time we see telnet behind another port number is when our time window is short (say, 15 minutes) and there’s a scan of another port happening.


A Note on Scans

There are scans and there are big scans. Run-of-the-mill scans are happening all the time, a few times an hour it seems. These scans are not particularly aggressive or intrusive; we see clients run a sweep of our networks looking for this service or that. The most popular target, surprisingly, seems to be telnet. Lagging far behind telnet are Session Initiation Protocol (SIP) and secure shell (ssh).

Every now and then we’ll see a monster scan, a leave-no-stone-unturned level scan. Every IP/port combination is exhaustively tapped, asking the question “is this thing on?”. One such scan asked this very question 2.87 million times over the course of about fifteen hours. Another knocked on the door 821K times in one 24-hour period.

It’s a jungle out there.


If you’ve got log retention or analysis needs, whether for IT best practices or for regulation and compliance requirements, the good folks over at elastic have a good solution for you. At xforty we were able to improve our ability to support customers, understand the makeup of our network traffic, and identify problems with our firewall policy.

Announcing rebbler

I’ve put together a tool for testing servers for their presence on DNS-based mail block lists. We have a few mail servers under our management, and on occasion it’s nice to be able to check for their presence on various DNS block lists. My goal is to have a Nagios check plugin as well as a command line tool to perform lookups. The first crack at it can be installed as a ruby gem or found in the rebbler github repository.

In addition to the command line interface and Nagios check functionality, it can be run as a web server using the Sinatra micro framework.
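The lookup behind any DNSBL check can be sketched in a couple of lines of shell. The octet reversal below is the standard DNSBL convention; the zone name is Spamhaus's public list, and 127.0.0.2 is the conventional test address:

```shell
# Reverse the IPv4 octets and query them under the block list's zone.
# A listed address resolves (typically to 127.0.0.x); an unlisted one
# returns NXDOMAIN.
ip="127.0.0.2"
reversed=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1}')
echo "$reversed"                          # prints 2.0.0.127
# dig +short "$reversed.zen.spamhaus.org" # actual network lookup, shown only
```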

Feel free to install:

gem install rebbler

And run it:


Or run the server:

rebbler -s

And visit http://localhost:4567/

Drag and drop multi-file uploads (Part II)

Part I describes single and multi-file uploads. This post adds the ability to accept drag and drop uploads.

Place your form inside an appropriately styled div; call this the drop area. We then attach a few event handlers: one to block the default behavior so that the page is not replaced with the dropped file, and one to handle the drop event in your drop area.

The HTML from Part I expanded to have a #drop_area:

<div id="drop_area">
  <form enctype="multipart/form-data" method="POST">
    <input id="file" multiple name="pic[]" type="file">
    <input name="Upload" type="Submit">
  </form>
</div>
JavaScript/jQuery:

  $(function() {
    // Prevent the browser's default dragover handling so a dropped
    // file does not replace the page.
    document.addEventListener("dragover", function(event) {
      event.preventDefault()
    }, true)

    $('#drop_area').on('drop', function(e) {
      // Pull the FileList off the native event and hand it to the file input.
      var files = e.originalEvent.dataTransfer.files
      $('#file')[0].files = files

      return false
    })
  })
addEventListener() is used because I could not find a way to block the event at the page/document level using jQuery.

The drop handler is attached to the #drop_area tag. It fetches the FileList from the event and assigns it to the file field.

Drag and drop multi-file uploads (Part I)

This is the first in a two-part series about native, browser-based multi-file drag and drop uploads.

Last year, while working on a project, I needed to make uploading multiple files from GPS receivers easy. I was delighted to find out that multi-file uploads with drag and drop are well supported in modern browsers: Firefox, Chrome, Safari, Opera, and IE 10 onward all support the posting of multiple files via HTTP.

I’ll start with a brief explanation of how single-file uploads work. From there, we’ll show how to modify your HTML and server-side code for multiple file uploads. Finally, in Part II, we’ll cover the client-side approach to handling drag and drop events to invoke the upload.
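As a preview of that starting point, a minimal single-file upload form is just an HTML form with a file input. This is a generic sketch; the pic field name is illustrative:

```html
<form enctype="multipart/form-data" method="POST">
  <input name="pic" type="file">
  <input name="Upload" type="Submit">
</form>
```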
