20 March 2015 Frank Muscarello

Chicago Vote

I Vote for Chicago Tech, Often

by Frank

As a Chicago-based serial entrepreneur, I have a unique perspective on how the city supports its businesses. Whether it’s attracting engineering talent, encouraging innovation or celebrating our wins, I’ve seen it all.

Over the past three years, one thing has become clear: the environment in which MarkITx operates today - bolstered by the policies coming out of the mayor’s office as well as the momentum of the tech startup community - has been conducive to our company’s growth.

Much has been written about Chicago’s tech community over the past few years. The number of technology jobs has increased by more than one-third to an estimated 40,000 since 2011. The city’s technology plan and vision include a commitment to modern infrastructure, smart communities, and technological innovation - all of which are foundational for growth.

From our early days inside 1871 to pitching at Google Demo Day to 4,000 members on 5 continents, MarkITx would not be where we are today if not for the supportive climate fostered by the city’s policies and those who recognize the importance of a strong tech community. We are the first to recognize that innovation and growth of any kind don’t happen in a vacuum (we’ve got the battle wounds to prove it). We appreciate the fact that sometimes you have to be disruptive to effect change.

Whether you’re a member of Chicago’s tech community, an advocate for economic growth, or just a Chicagoan proud to live in this city, it’s hard to deny that investing in technology benefits all of us.

There is a runoff election scheduled for April 7th, and I encourage everyone to take part in their democracy. Get out and vote! If you have voter-related questions or you’re not sure where to find your polling place, check out these FAQs.

11 March 2015 Ben Blair

Open Compute 6 Pack

Facebook’s Open Hardware Modular Switch and The Future of Scalable, Flexible Networking

by Ben

Over the past few years, Facebook has been reinventing its datacenters from the ground up and sharing the resulting hardware designs through the Open Compute Project. Facebook started with a server, then added a top-of-rack switch called Wedge. Just last month, the Internet giant put the finishing touches on the final networking component, an open hardware modular switch called “6-pack”.

With 6-pack, Facebook has created an architecture that lets the company build a network fabric of nearly any size from a simple set of common building blocks. Those open hardware building blocks include a rack with integrated power backup and redundant distribution, servers, storage, a top-of-rack switch, and now a core fabric switch. Many of these individual components are already available from vendors such as Hyve, Penguin Computing, Quanta, and now HP.

Why an open network?

According to Facebook, the key driver behind an open network infrastructure is full operator control. This is something every IT infrastructure operator requires in order to scale quickly and gain efficiencies across the network.

What does full operator control look like in practice? To illustrate, let’s say you want to use Chef to orchestrate your network configuration. If you’re running proprietary hardware from a vendor like Cisco or Juniper, you have to wait for that OEM to add support for Chef. You might be waiting for years. But if each of your network devices is a Linux server that you control, you’re not stuck waiting for anyone! You’re in the driver’s seat, not the OEM.
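To make that concrete, here is a minimal sketch of the idea - not an actual Chef recipe (those are written in Ruby), just a few lines of Node.js showing that a whitebox switch running Linux is a host you can template and configure yourself. The interface names, addresses, and file path are hypothetical, and it assumes a Cumulus-style switch that applies Debian-format interface files with ifreload:

```javascript
// Sketch only: on a Linux-based switch, network config is just files you control.
var fs = require('fs');
var execSync = require('child_process').execSync;

// Hypothetical uplink layout, defined as data in your own repo.
var uplinks = [
  { name: 'swp1', address: '10.0.0.1/31' },
  { name: 'swp2', address: '10.0.0.3/31' }
];

// Render a Debian-style interfaces file from that data.
var config = uplinks.map(function (intf) {
  return [
    'auto ' + intf.name,
    'iface ' + intf.name,
    '    address ' + intf.address
  ].join('\n');
}).join('\n\n');

fs.writeFileSync('/etc/network/interfaces.d/uplinks.intf', config + '\n');

// Apply the change immediately - no waiting on an OEM release cycle.
execSync('ifreload -a');
```

Whether you do this with Chef, Ansible, or a script like the one above, the point is the same: the device is yours to automate.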

If the pace at which you scale your business is beholden to the release schedule of a specific OEM, you’ve not only put yourself at a competitive disadvantage, but also stifled innovation and growth. According to ISI Technology Research, 80% of a network engineer’s time is spent manually configuring the network. It stands to reason that modern devops practices applied to networks running some flavor of Linux will dramatically reduce that percentage.

The Cost Savings are Compelling

If saving engineers all that time isn’t enough to justify an honest look at open, modularized components, consider the cost savings. Cumulus Networks published a CapEx Analysis for bare metal switches running a Linux operating system. The brief’s cost comparison used a 10-rack leaf-and-spine setup as its example.

List price for networking hardware supporting an Open Compute Project (OCP)-compatible setup from Quanta was $96,179, vs. $755,600 from Cisco and $641,028 from Arista. Even if you add $44K for Cumulus’ 3-year support package, the total comes to roughly $140K - still less than a quarter of either incumbent’s price. There is ample room to extract infrastructure savings by replacing incumbent gear with bare metal switches, and those savings can then be reinvested into more hardware and other innovations.

Going open is even more compelling when you take into account the OpEx savings.

The Future is Soon, If Not Now

Most businesses are not grappling with the scaling issues that prompted Facebook to develop its own open network. But if history is any guide, the approach will percolate down to datacenter operators of all sizes - not just the giants. Facebook will propose that its new 6-pack switch design become part of the OCP, and it has committed to continue working with the OCP community to develop open network technologies that are more flexible, scalable, and efficient. However, it’s far from certain that OCP will be the winning standards body.

There’s more than a hint of déjà vu here if you consider what happened in the server market. The same kind of unbundling swept through servers when the operating system was separated from the hardware. That trend left the server market a low-margin commodity business, which is exactly where networking is headed. There are strong arguments that it will happen faster with networking because the current pain is greater. As AWS has said, the network is a bigger pain point than servers.

Modular Flexibility #FTW

Looking ahead, there is clearly an optimal state from the end-user point of view. It’s a world where you can order hardware based on open-sourced designs from one of many super-scale manufacturers like Quanta, and where the lure and ease of shopping around and vendor-hopping will be strong. You can have an integrator build out and deliver your racks to the datacenter space you rent from an established provider like Switch or Equinix. You will run open-source, Linux-based software on everything, including servers, networking, and storage. And you can orchestrate it all with whatever open-source tools please you at the time (today that might be something like Mesos or Kubernetes + Chef).

There’s no denying that 6-pack, the final component in Facebook’s open network vision, has added fuel to a shift in how we view hardware that will continue to reverberate throughout the tech world. If the end result is faster, more efficient infrastructure and undeniable savings to the bottom line, it’s hard to be anything but bullish on the trend.

For members trading on the MarkITx Exchange, the rapid adoption of open hardware means increasing commoditization and fungibility. And that means more consolidated liquidity, which is good for every buyer and seller in the market.

28 January 2015 Dylan Lingelbach

Streaming Dynamo Backups

Announcing our streaming DynamoDB backup library

We recently open sourced our streaming Node.js DynamoDB-to-S3 backup library.

You may ask:

Why create a library for DynamoDB backups? Isn't DynamoDB backed up by Amazon already? Doesn't AWS have support for exporting data?

Yes and yes. However, AWS DynamoDB only guarantees that Amazon won’t lose your data. What happens if you have a script that accidentally deletes your users table? What if you need to look at the state of your data two weeks ago to help reproduce a tricky bug?

You can use Data Pipeline to save DynamoDB to S3, but what if you want to limit the rate at which you request data? Or maybe you want to run the backup locally, or you don’t want a pipeline for every table?

This is where dynamo-backup-to-s3 comes in.

It is a simple npm module that allows you to stream DynamoDB backups to S3 with a lot of flexibility.
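Here’s a rough sketch of how you might use it. The bucket and table names below are placeholders, and the option and event names follow the repo’s README at the time of writing, so check GitHub for the current API:

```javascript
var DynamoBackup = require('dynamo-backup-to-s3');

// Placeholder bucket and table names - adjust for your own account.
var backup = new DynamoBackup({
  bucket: 'my-dynamo-backups',   // S3 bucket to stream backups into
  readPercentage: 0.25,          // throttle: use only a fraction of each table's read capacity
  excludedTables: ['sessions'],  // skip tables you don't need to back up
  stopOnFailure: false           // keep going if one table fails
});

backup.on('error', function (data) {
  console.log('Error backing up ' + data.table);
  console.log(data.err);
});

backup.on('start-backup', function (tableName) {
  console.log('Starting backup of ' + tableName);
});

backup.on('end-backup', function (tableName) {
  console.log('Finished backup of ' + tableName);
});

// Stream every non-excluded table in the account to S3.
backup.backupAllTables(function () {
  console.log('All tables backed up');
});
```

The readPercentage knob is what answers the rate-limiting question above: it keeps the backup from starving your application’s reads.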

Even though this library is small, it’s been a huge help to us.

Check it out on GitHub. Feel free to open issues/send PRs if you have suggestions!