IoT Technology: Devices

Discussions of IoT often focus on the technology, so let’s start there. IoT consists of devices, which are the “things” that interact with the physical world and communicate with IoT Back-end systems over a network. There are two types of IoT devices: sensors and actuators.

An IoT system will typically be made up of many devices – from dozens to millions – talking to a scalable Back-end system. This Back-end system often runs in the Cloud. In some cases the IoT devices will talk directly to the Back-end systems. In other cases an additional system called an IoT Gateway will be placed between the devices and the Back-end systems. The IoT Gateway will typically talk to multiple local IoT devices, perform communications protocol conversions, perform local processing, and connect to the Back-end systems over an Ethernet, WiFi, or cellular modem link.
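
To make the device-to-Gateway-to-Back-end flow concrete, here is a minimal Python sketch of the Gateway role: poll the local devices, batch their readings, and forward them to the Back-end over HTTP. The endpoint URL, device IDs, and payload format are hypothetical placeholders for this sketch, not any particular product’s API.

```python
import json
import time
import urllib.request

# Hypothetical Back-end ingestion endpoint -- in a real deployment this would be
# your Cloud service's API (and it should use HTTPS with authentication).
BACKEND_URL = "https://backend.example.com/ingest"

def read_local_devices():
    """Stand-in for polling the local IoT devices attached to this gateway."""
    # A real gateway would read these over Modbus, BLE, ZigBee, serial, etc.
    return [
        {"device_id": "temp-01", "type": "temperature", "value": 21.4},
        {"device_id": "hum-01", "type": "humidity", "value": 48.0},
    ]

def forward_to_backend(readings):
    """Batch the readings and POST them to the Back-end system."""
    payload = json.dumps({"gateway_id": "gw-001",
                          "timestamp": time.time(),
                          "readings": readings}).encode("utf-8")
    req = urllib.request.Request(BACKEND_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

if __name__ == "__main__":
    while True:
        forward_to_backend(read_local_devices())
        time.sleep(60)  # local batching / processing interval
```

In practice the Gateway is also where protocol conversion and local filtering happen, so the batching interval and payload shape are design decisions rather than givens.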

IoT Devices

IoT devices consist of sensors, actuators, and communications. Sensors, as the name implies, read information from the physical world. Examples would be temperature, humidity, barometric pressure, light, weight, CO2, motion, location, pH level, concentrations of many different chemicals, distance, voltage, current, images, etc. There are sensors available for an incredible range of information and many more under development. Think of things like a tiny DNA sequencer or a sensor that can detect the presence of the bacteria or viruses associated with various diseases – both of these are under development!

Actuators are able to change something in the physical world. Examples would be a light switch, a remotely operated valve, a remotely controlled door lock, a stepper motor, a 3D printer, or the steering, brakes, and throttle of a self-driving car.

IoT Device Examples

For an idea of the range of low-cost IoT-compatible sensors, take a look at SparkFun Electronics, a leading source of IoT device technology for prototyping, development, and hobbyists. The sensor section at https://www.sparkfun.com/categories/23 lists over 200 sensors that can be used with Arduino and similar systems. Note that these are basically development and prototyping units – prices in production quantities will be lower.

Some sensors are familiar – temperature is perhaps the most obvious example. But many are more interesting. Consider, for example, the gas sensors: hydrogen, methane, LPG, alcohol, carbon monoxide; all available at prices of $4.95 – $7.95. Combine one of these with an Arduino Pro Mini, available for $9.95, and you can build a targeted chemical sensor for less than $20.00.
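
As a rough illustration of how little glue is needed, here is a Python sketch that reads such a sensor through the Arduino from a connected computer or gateway. It assumes the Arduino sketch simply prints one raw analog reading (0–1023) per line over USB serial and that the pyserial package is installed; the port name and alarm threshold are placeholders, and the threshold is not a calibrated concentration.

```python
import serial  # pyserial: pip install pyserial

# Assumptions: the Arduino prints one raw ADC reading per line, and the board
# shows up at this serial port -- adjust both for your own setup.
PORT = "/dev/ttyUSB0"
BAUD = 9600
ALARM_THRESHOLD = 400  # illustrative raw-value threshold, not a calibrated ppm level

with serial.Serial(PORT, BAUD, timeout=2) as gas_sensor:
    while True:
        line = gas_sensor.readline().decode("ascii", errors="ignore").strip()
        if not line.isdigit():
            continue
        reading = int(line)
        print(f"raw gas sensor reading: {reading}")
        if reading > ALARM_THRESHOLD:
            print("WARNING: gas concentration above threshold")
```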

What can you do with a $20.00 LPG or carbon monoxide sensor? That is the wrong question. Instead, you should be asking “what problems am I facing that could be addressed with a low-cost, network-connected sensor?” The point is that there is an incredible and growing array of inexpensive sensors available. The technology is available – what we need now is the imagination to begin to creatively use ubiquitous sensors, ubiquitous networking, ubiquitous computing, and ubiquitous data.

The application of modern electronics technology to sensors is just beginning to be felt. As in many other areas of IoT, the basic capabilities have been around for years – detecting and measuring the concentration of LPG vapor or carbon monoxide isn’t new. Detecting LPG vapor concentration with a sub-$20 networked device that feeds the data directly into a large distributed computing system, in a form that is readily manipulated by software, is new. And huge!

LPG and carbon monoxide are just examples. The same technologies are producing sensors for a wide range of chemicals and gases.

The combination of useful capabilities, low cost, network connection, and integration into complex software applications is a complete revolution. And this revolution is just beginning. What happens to agriculture when we can do a complete soil analysis for each field? What happens if we have nutrient, moisture, light, and temperature information for each ten-foot square in a field, updated every 15 minutes over the entire growing season? What happens when we have this information for a 20-year period? What happens when this information is dynamically combined with plant growth monitoring, standard plant growth profiles, weather forecasts, and climatic trends?

Going further, what if this data is combined with an active growth management system where application of fertilizer, pesticide, and water is optimized for individual micro-areas within a field? Technology is progressing to the point where we can provide the equivalent of hands-on gardening to commercial fields.

As an example of work going on in this area see the Industrial Internet Consortium Testbed on Precision Crop Management at http://www.iiconsortium.org/precision-crop-management.htm.

The Internet of Things

There are a lot of things to explore around IoT – what it is, technology, system architectures, security, implementation challenges, and many others. We will get to all of those, but a great place to start is how we got here and the implications of IoT. Rather than starting with things, let’s start with what is really important – economics.

Just what is the Internet of Things (IoT)? At the simplest level it is devices that interact with the physical world and communicate over a network. Simple, but with very significant implications. Let’s dig into these implications and see how such a simple concept can have such great impact.

The major drivers of IoT are technology, economics, software, and integration. Individually these are significant. Combined they will have a major impact on all aspects of life. Some of this impact is good, and some may be bad. As with many things, good vs. bad will often depend on the implementation and how it is used.

Is IoT New?

A common question is whether IoT is something new and revolutionary or just a buzzword for old ideas. The answer is “yes”…

Much of the foundation of IoT has been around for quite a while. SCADA (Supervisory Control And Data Acquisition) systems have been around since the 1950s, managing electrical power grids, railroads, and factories. Machine communications over telephone lines and microwave links have been around since the 1960s. Machine control systems, starting on mainframes and minicomputers, have also been around since the 1960s.

The big changes are economics, software, and integration. Microsensors and SoC (System on a Chip) technology for CPUs and networking are driving the cost of devices down – in some cases by a factor of a thousand! Advances in networking – both networking technology as well as the availability of pervasive networking – are changing the ground rules and economics for machine to machine communication.

The use of standards is greatly easing integration. Advances in software, software frameworks, and development tools, as well as the availability of functional libraries for many tasks, are creating an explosion in innovative IoT products and capabilities.

But the most significant new factor in IoT is economics. Technology, pervasive networking, and cloud computing are driving the cost of IoT down – in many cases by a factor of a thousand or more! New capabilities in sensors and actuators are opening up new areas of application. Cost reductions this large are often more important than new capabilities as they vastly broaden areas of application.

Another massive change is the monetization of data. Companies are increasingly aware of the value of the data captured from IoT systems, especially after extensive analysis and data mining.

Further emphasizing the importance of economics are the new business models that are emerging. For example, jet engine companies moved from selling jet engines to selling “thrust hours” – a service-based model of supplying power as a service rather than selling hardware. A key part of this is extensive use of IoT to monitor every aspect of jet engine operation to provide more effective maintenance and support of the engines. As an example, Virgin Atlantic reports that their Boeing 787 aircraft produce 500GB of data per flight.

Embedded systems, meet the Internet of Things

An article in Military Embedded Systems magazine discusses the evolution of embedded systems as influenced by the Internet of Things (IoT): Embedded systems, meet the Internet of Things.

The article notes: “In many ways, embedded systems are the progenitor of the Internet of Things (IoT) – and now IoT is changing key aspects of how we design and build military embedded systems. In fact, the new model for embedded systems within IoT might best be described as design, build, maintain, update, extend, and evolve.”

Skills Mastery

You go through a series of stages when learning a new skill. Let’s look at these stages, covering both the characteristics and the implications of each stage. It is helpful to understand a framework for skill levels, what level you are at – and what level the other members of your team are at.

One powerful model for this is the Dreyfus Model of Skills Acquisition. The Dreyfus Model has been used in a variety of professional settings, including nursing.

Several researchers suggest that it takes roughly 10 years and 10,000 hours of intensive effort to become an expert in a subject. This isn’t just 10 years of experience – it is 10 years of applied, concentrated, progressively more difficult study and practice of the subject. The classic “one year of experience repeated 10 times” will not lead you to mastery. They also estimate that less than 5% of people master even a single subject, much less multiple subjects.

The good news is that many of the skills necessary for achieving mastery of a subject are learned while you are working to master your first subject, and it is then easier and faster to master additional subjects.

An excellent book for understanding how you think and learn – and how to do it better – is Pragmatic Thinking and Learning by Andy Hunt. I have been heavily influenced by this book, and enthusiastically recommend it. It is worthwhile checking out Andy’s website at www.toolshed.com.

Stages of Skills Mastery (from Wikipedia, the free encyclopedia)

In the fields of education and operations research, the Dreyfus model of skill acquisition is a model of how students acquire skills through formal instruction and practicing. The model proposes that a student passes through five distinct stages: novice, advanced beginner, competent, proficient, and expert.

In the novice stage, a person follows rules as given, without context, with no sense of responsibility beyond following the rules exactly. Competence develops when the individual develops organizing principles to quickly access the particular rules that are relevant to the specific task at hand; hence, competence is characterized by active decision making in choosing a course of action. Proficiency is shown by individuals who develop intuition to guide their decisions and devise their own rules to formulate plans. The progression is thus from rigid adherence to rules to an intuitive mode of reasoning based on tacit knowledge.

Michael Eraut summarized the five stages of increasing skill as follows:

1. Novice

  • “rigid adherence to taught rules or plans”
  • no exercise of “discretionary judgment”

2. Advanced beginner

  • limited “situational perception”
  • all aspects of work treated separately with equal importance

3. Competent

  • “coping with crowdedness” (multiple activities, accumulation of information)
  • some perception of actions in relation to goals
  • deliberate planning
  • formulates routines

4. Proficient

  • holistic view of situation
  • prioritizes importance of aspects
  • “perceives deviations from the normal pattern”
  • employs maxims for guidance, with meanings that adapt to the situation at hand

5. Expert

  • transcends reliance on rules, guidelines, and maxims
  • “intuitive grasp of situations based on deep, tacit understanding”
  • has “vision of what is possible”
  • uses “analytical approaches” in new situations or in case of problems
Report on IoT (Internet of Things) Security

IoT (Internet of Things) devices have – and in many cases have earned! – a rather poor reputation for security. It is easy to find numerous examples of security issues in various IoT gateways and devices.

So I was expecting the worst when I had the opportunity to talk to a number of IoT vendors and to attend the IoT Day at EclipseCon. Instead, I was pleasantly surprised to discover that considerable attention is being paid to security!

  • Frameworks, infrastructure, and lessons from the mobile phone space are being applied to IoT. The mobile environment isn’t perfect, but has made considerable progress over the last few years. This is actually a pretty good starting point.
  • Code signing is being emphasized. This means that the vendor has purchased a code signing certificate from a known Certificate Authority and used it to sign their application. This ensures that the code has not been corrupted or tampered with and provides some assurance that it is coming from a known source. Not an absolute guarantee, as the Certificate Authorities aren’t perfect, but a good step.
  • Certificate-based identity management, based on X.509 certificates, is increasingly popular. This provides a strong mechanism to identify systems and encrypt their communications (a minimal client sketch follows this list).
  • OAuth-based authentication and authorization is becoming more widely used.
  • Encrypted communications are strongly recommended. The Internet of Things should run on https!
  • Encrypted storage is recommended.
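
As a small illustration of certificate-based identity combined with encrypted communications, here is a Python sketch of a device or gateway opening a mutually authenticated TLS connection to its back-end. The host name and certificate file names are placeholders for credentials issued by your own PKI, and the sketch assumes the server requires a client certificate.

```python
import socket
import ssl

# Placeholders -- substitute the certificates issued by your own PKI.
BACKEND_HOST = "backend.example.com"
CA_CERT = "ca.pem"          # root that signed the back-end's certificate
CLIENT_CERT = "device.pem"  # this device's X.509 identity certificate
CLIENT_KEY = "device.key"   # private key matching CLIENT_CERT

# Verify the server against our CA, and present our own certificate to it.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=CA_CERT)
context.load_cert_chain(certfile=CLIENT_CERT, keyfile=CLIENT_KEY)

with socket.create_connection((BACKEND_HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=BACKEND_HOST) as tls:
        print("negotiated:", tls.version(), tls.cipher())
        tls.sendall(b"GET /status HTTP/1.1\r\nHost: " +
                    BACKEND_HOST.encode() + b"\r\n\r\n")
        print(tls.recv(4096).decode(errors="ignore"))
```

The same pattern carries over to MQTT or CoAP clients; the point is that both ends prove their identity with certificates and everything on the wire is encrypted.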

Julian Vermillard of Sierra Wireless gave a presentation at EclipseCon on 5 Elements of IoT Security. His points included:

  • Secure your hardware. Use secure storage and secure communications. Firmware and application updates should be signed (a signature-check sketch follows this list).
  • “You can’t secure what you can’t update.”
    • Upgrades must be absolutely bulletproof – you can never “brick” a device!
    • Need rollback capabilities for all updates. An update may fail for many reasons, and you may need to revert to an earlier version of the code. For example, an update might not work with other software in your system.
  • Secure your communications
    • Recommends using Perfect Forward Secrecy.
    • Use public key cryptography:
      • X.509 certificates (see above discussions on X.509). Make sure you address certificate revocation.
      • Pre-Shared Keys. This is often easier to implement but weaker than a full Public Key X.509 infrastructure.
      • Whatever approach you take, make sure you can handle regular secret rotation or key rotation.
    • For low-end devices look at TLS Minimal. I’m not familiar with this; it appears to be an IETF Draft.
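
To illustrate the “sign your firmware updates” point, here is a hedged Python sketch of the check a device might run before installing an image, using the cryptography package. The file names, and the choice of RSA with PKCS#1 v1.5 and SHA-256, are assumptions made for the example rather than anything Julian specified.

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# Placeholders: the vendor's public key is baked into the device, and each
# update ships as an image plus a detached signature.
with open("vendor_public_key.pem", "rb") as f:
    vendor_key = serialization.load_pem_public_key(f.read())
with open("firmware-v2.bin", "rb") as f:
    firmware = f.read()
with open("firmware-v2.sig", "rb") as f:
    signature = f.read()

try:
    vendor_key.verify(signature, firmware, padding.PKCS1v15(), hashes.SHA256())
    print("signature OK -- stage the update, keep the old image for rollback")
except InvalidSignature:
    print("signature check FAILED -- refusing to install this image")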

Julian also recommended keeping server security in mind – the security of the back-end service the IoT device or gateway is talking to is as important as device-level security!

The challenge now is to get actual IoT manufacturers and software developers to build robust security into their devices. For industrial devices, where there is a high cost for security failures, we may be able to do this.

For consumer IoT devices you will have to vote with your wallet. If secure IoT devices sell better than insecure ones, manufacturers will provide security. If cost and time to market are everything, we will get insecure devices.

What Can We Do About Superfish?

Perhaps the biggest question about Superfish is what we can do about it. The first response is to throw technology at it.

The challenge here is that the technology used by Superfish has legitimate uses:

  • The core Superfish application is interesting – using image analysis to deconstruct a product image and search for similar products is actually quite ingenious! I have no reservations about this if it is an application a user consciously selects and installs and deliberately uses.
  • Changing the HTML data returned by a web site has many uses – for example, ad blocking and script blocking tools change the web site. Even deleting tracking cookies can be considered changing the web site! Having said that, changing the contents of a web site is a very slippery slope. And I have real problems with inserting ads in a web site or changing the content of the web site without making it extremely clear this is occurring.
  • Reading the data being exchanged with other sites is needed for firewalls and other security products.
  • Creating your own certificates is a part of many applications. However, I can’t think of many cases where it is appropriate to install a root certificate – this is powerful and dangerous.
  • Even decrypting and re-encrypting web traffic has its place in proxies, especially in corporate environments.

The real problem with Superfish is how the combination of things comes together and is used. There is also the quality of implementation – many reports indicate poor implementation practices, such as a single insecure password for the entire root certificate infrastructure. It doesn’t matter what encryption algorithm you are using if your master password is the name of your company!

Attempting a straight technology fix will lead to “throwing the baby out with the bath water” for several valuable technologies. And a technical fix for this specific case won’t stop the next one.

The underlying issue is how these technologies are implemented and used. Attempting to fix this through technology is doomed to failure and will likely make things worse.

Yes, there is a place for technology improvements. We should be using DNSSEC to make sure DNS information is valid. Stronger ways of validating certificate authenticity would be valuable – someone suggested DANE in one of the comments. DANE involves including the SSL certificate in the DNS records for a domain. In combination with DNSSEC it gives you higher confidence that you are talking to the site you think you are, using the right SSL certificate. The issue here is that it requires companies to include this information in their DNS records.

The underlying questions involve trust and law as well as technology. To function, you need to be able to trust people – in this case Lenovo – to do the right thing. It is clear that many people feel that Lenovo has violated their trust. It is appropriate to hold Lenovo responsible for this.

The other avenue is legal. We have laws to regulate behavior and to hold people and companies responsible for their actions. Violating these regulations, regardless of the technology used, can and should be addressed through the legal system.

At the end of the day, the key issues are trust, transparency, choice, and following the law. When someone violates these they should expect to be held accountable and to pay a price in the market.

Superfish – Man-in-the-Middle Adware

Superfish has been getting a lot of attention – the Forbes article is one of the better overviews.

Instead of jumping in and covering the details of Superfish, let’s look at how it might work in the real world.

Let’s say that you are looking for a watch and you visit Fred’s Fine Watches. Every time you want to look at a watch, someone grabs the key to the cabinet from Fred, uses a magic key creator to create a new key, opens the cabinet, grabs the watch from Fred, studies the watch, looks for “similar” watches, and jams advertising fliers for these other watches in your face – right in the middle of Fred’s Fine Watches! Even worse, they leave the key in the lock, raising the possibility that others could use it. Further, if you decide to buy a watch from Fred, they grab your credit card, read it, and then hand it to Fred.

After leaving Fred’s Fine Watches you visit your bank. You stop by your doctor’s office. You visit the DMV for a driver’s license renewal. And, since this article is written in February, you visit your accountant about taxes. Someone now has all this information. They claim they aren’t doing anything with it, but there is no particular reason to trust them.

How does all this work? Superfish is a man-in-the-middle attack that destroys the protection offered by SSL (Secure Sockets Layer). It consists of three basic components: the Superfish adware program, a new SSL Root Certificate inserted into the Windows Certificate Store, and a Certificate Authority program that can issue new certificates.

SSL serves two purposes: encryption and authentication. SSL works by using a certificate that includes a public encryption key that is used to negotiate a unique encryption key for each session. This encryption key is then used to uniquely encrypt all traffic for that session. There are two types of SSL certificates: public and private.

Public certificates are signed. This means that they can be verified by your browser as having been created from another certificate – you have at least some assurance of where the certificate came from. That certificate can then be verified as having been created from another certificate. This can continue indefinitely until you reach the top of the certificate tree, where you have a master or root certificate. These root certificates can’t be directly verified and must be trusted.

Certificates are tied to Internet domains. For example, Google controls the certificates for google.com, and is the only one who should be able to obtain a signed certificate for mail.google.com, maps.google.com, etc.

Bills Browser Certificates, Inc., can only create signed certificates for billsbrowsercertificates.com. The details are a bit more complex, but this is the general idea – signed certificates can be traced back to a root certificate. If the owner of that root certificate is cautious, you can have a reasonable level of trust that the certificate is what it claims to be.

Your browser or OS comes with a (relatively small) list of root certificates that are considered trusted. Considerable effort goes into managing these root certificates and ensuring that they are good. Creation of new signed certificates based on these root certificates is tightly controlled by whoever owns the root certificate.
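
As a small illustration of that trust chain in action, here is a Python sketch that connects to an HTTPS site using the platform’s trusted root store. The handshake succeeds only if the presented certificate chains back to one of those roots and matches the hostname; the host name here is just a placeholder.

```python
import socket
import ssl

host = "www.example.com"  # any HTTPS site

# create_default_context() loads the platform's trusted root certificates and
# enables certificate and hostname verification.
context = ssl.create_default_context()

try:
    with socket.create_connection((host, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            # Only reached if the chain verifies back to a trusted root.
            print("verified certificate subject:", tls.getpeercert()["subject"])
except ssl.SSLCertVerificationError as err:
    # A certificate signed by a root that is NOT in the trust store fails here.
    print("verification failed:", err)
```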

Certificate signing is a rather advanced topic. Let’s summarize it by saying that the mathematics behind certificate signing is sound, that implementations may be strong or weak, and that there are ways of overriding the implementations.

Private certificates are unsigned. They are the same as public certificates and work in exactly the same way, but they can’t be verified the way public certificates can. Private certificates are widely used and are a vital part of communications infrastructure.

According to reports, Lenovo added a new Superfish root certificate to the Microsoft Certificate Store on certain systems. This means that Superfish is trusted by the system. Since Superfish created this certificate, they had all the information that they needed to create new signed certificates – which they did, by including a certificate authority program that creates new certificates signed by the Superfish root certificate, on your system, while you are browsing. These certificates are completely normal, and there is nothing unusual about them – except the way they were created.
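
To see why a trusted root is so powerful, here is an illustrative Python sketch (using the cryptography package) of a locally generated root signing a certificate for an arbitrary domain, which is essentially the capability the bundled Superfish certificate authority exercised on each user’s machine. All names here are made up.

```python
# pip install cryptography
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

now = datetime.datetime.utcnow()

# A self-signed "root" certificate. On its own it is harmless; the damage comes
# from placing it in a machine's trusted certificate store.
root_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
root_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Local Root")])
root_cert = (
    x509.CertificateBuilder()
    .subject_name(root_name).issuer_name(root_name)
    .public_key(root_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now).not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(root_key, hashes.SHA256())
)

# With the root key in hand, a certificate can be minted for ANY domain name.
leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
leaf_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.any-bank.example")]))
    .issuer_name(root_name)
    .public_key(leaf_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now).not_valid_after(now + datetime.timedelta(days=30))
    .sign(root_key, hashes.SHA256())
)
print("issued:", leaf_cert.subject.rfc4514_string(),
      "signed by:", leaf_cert.issuer.rfc4514_string())
```

Any browser that trusts the root will accept the minted leaf certificate without warnings, which is exactly the behavior described below.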

Again, according to reports, Superfish hijacked web sessions. Marc Rogers shows an example where Superfish has created a new SSL certificate for Bank of America. The way it works is that Superfish uses this certificate to communicate with the browser and the user. The user sees an https connection to Bank of America, with no warnings – there is, in fact, a secure encrypted session in place. Unfortunately, this connection is to Superfish. Superfish then uses the real Bank of America SSL certificate to communicate with Bank of America. This is a perfectly normal session, and BOA has no idea that anything is going on.

To recap, the user enters their bank id and password to log in to the BOA site. This information is encrypted – and sent to Superfish. Superfish decrypts the information and then re-encrypts it to send to BOA using the real BOA SSL certificate. Going the other way, Superfish receives information from BOA, decrypts it, reads it, re-encrypts it with the Superfish BOA certificate, and sends it back to you.

Superfish apparently creates a new SSL certificate for each site you visit. The only reason that all this works is that they were able to add a new root certificate to the certificate store – without this master certificate in the trusted certificate store they would not be able to create new trusted certificates.
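
One way to notice this kind of interception is certificate pinning: record the fingerprint of the certificate a site presents on a machine you trust, then compare it with what your own machine sees. The Python sketch below does that; the host and pinned fingerprint are placeholders you would fill in yourself.

```python
import hashlib
import socket
import ssl

HOST = "www.bankofamerica.com"
# Hypothetical placeholder: the SHA-256 fingerprint of the certificate the site
# is known to present, recorded from a machine you trust.
PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

# Deliberately skip chain verification so we can see whatever certificate is
# actually being presented, even one minted by an interception proxy.
context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        der_cert = tls.getpeercert(binary_form=True)

fingerprint = hashlib.sha256(der_cert).hexdigest()
if fingerprint == PINNED_SHA256:
    print("certificate matches the pinned fingerprint")
else:
    print("MISMATCH -- something between you and the site is presenting its own certificate")
    print("presented fingerprint:", fingerprint)
```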

Superfish can also change the web page you receive – this is the real purpose of Superfish. In normal operation Superfish will modify the web page coming back from the web site you are visiting by inserting new ads. Think about it – you have no idea what the original web site sent, only what Superfish has decided to show you!

Superfish is sitting in the middle of all your web sessions. It reads everything you send, sends arbitrary information to an external server (necessary for the image analysis it claims to perform, but can be used for anything), forges encryption, and changes the results you get back.

The real threat of Superfish is that it contains multiple attack vectors and, by virtue of the root certificate, has been granted high privileges. Further, the private key Superfish is using for their root certificate has been discovered, meaning that other third parties can create new signed certificates using the Superfish root certificate. There is no way to do secure browsing on a system with Superfish installed. And no way to trust the results of any browsing you do, secure or not.
