Just a guy in Vermont trying to connect all the dots...
Author's posts
Apr 16
R.I.P. Simon Gwatkin
I came to know Simon when I worked at Mitel Networks in Ottawa from 2001-2007, although it wasn't really until the later years when I wound up working in Mitel's Office of the CTO and looking more strategically at what the future might hold for companies like Mitel. A lot of my research and time wound up being spent looking at social media and the changing communication landscape. With Simon's role as head of strategic marketing at Mitel we wound up having any number of quite lengthy conversations about where things were going. We didn't always agree, which led to very useful and valuable discussions. Simon also got me engaged with industry analysts and helped set me down a path that was quite useful in many later years.
I also spent a good bit of time interacting with some of the startups that were being nurtured under the umbrella of Wesley Clover, the investment vehicle of Sir Terry Matthews. Simon was involved with Terry's investments there and had an intense passion for helping new entrepreneurs. (And starting in 2008 he had a more permanent role with Wesley Clover.) He was fascinated by the communications business (both the telecom kind of "communications" and the marketing/PR kind) and by all the different ways we communicate.
When I was part of the large layoff that happened when Mitel purchased Inter-Tel in 2007, Simon helped with his vast network of contacts to try to find a place for me to land. While I ultimately wound up at Voxeo by virtue of some of the blogging I was doing, a couple of his leads were ones that I explored.
After that we stayed in touch over the years and increasingly found ourselves interacting more with each other through Facebook and social networks. One of his sons was doing something with security work at one point, and I'd shared some VoIP security resources I knew of. A few years back when my wife was dealing with breast cancer, a good friend of Simon's was battling it, too, and so we shared information - and shared with each other that intense frustration of being unable to do a whole lot to help someone we care about.
Along the way we had fun teasing each other about language and many other topics. Being a Brit with his very dry wit, he could always rise to the bait of commentary about the English language (versus "American") or other similar topics.
Simon was a gentleman and just a great guy in so many ways. I never knew his children but knew from his comments and posts that he was quite proud of them. He will be missed - and my thoughts and condolences are certainly with his family right now.
For those in the Ottawa region, or able to get there (and I am not able to do so), the obituary says a memorial service will be held tomorrow, April 17, 2014, at 2:30pm. An online guestbook is available for those who wish to leave messages for his family.
R.I.P., Simon. Thank you for all that you did.
If you found this post interesting or useful, please consider either:
- following me on Twitter;
- adding me to a circle on Google+;
- following me on App.net;
- subscribing to my email newsletter; or
- subscribing to the RSS feed
Apr 16
Verizon Wireless Approaching 50% IPv6 In Latest World IPv6 Launch Measurements
The latest World IPv6 Launch measurements of network operators were published yesterday and among the charts available for the top 10 networks was this great one showing Verizon Wireless’ network as almost hitting 50% IPv6 deployment:
The actual measurement this month was 48.71%, but on that growth path I expect it to climb over the 50% mark by next month.
As we have written about in the past, and as mentioned at the bottom of the World IPv6 Launch measurements page, the measurements show the percentage of IPv6 deployment seen from each registered network by the four companies participating in the measurements program: Google, Facebook, Yahoo! and Akamai. The different methodologies used by the four companies are explained at the bottom of that World IPv6 Launch page.
Very cool to see this amount of IPv6 deployment happening within a mobile network. How about you? Are you ready for IPv6? If not, you may want to start with our IPv6 resources and please do let us know how we can help you!
Apr 14
As Of April 15, Yammer Will Be Effectively Dead To Me
Yes, indeed:
This app is out of date and will be discontinued on Apr 15, 2014. For the best Yammer experience, please use your web browser.
Except, of course, that the web browser isn't the "best" Yammer experience for me.
This change was first announced earlier this year with a brief statement on the Office 365 site that had this as an answer for "why?":
We are refocusing our desktop efforts on creating a companion app to our web experience, rather than a replacement for the website. We’ve seen that our users prefer our desktop experience for real-time alerts, but prefer our web experience to post messages and share content. We’ve developed our Windows Notifier with this in mind - the app will provide real-time notifications on desktop to complement and serve as a companion to the Yammer web experience.
As noted by multiple people in the comments to that page, I'm not sure who Microsoft asked, but for many of us the desktop app was the way we preferred to post and share content.
Microsoft notes that they offer a "Desktop Notifier" app for Yammer, which they do, but it has one wee little problem:
Yep, it's only for Windows. I'm a Mac user. They apparently don't care about me.
That app also seems to only generate notifications... not let you actually interact with the Yammer feed. It is very definitely NOT a "replacement". As stated in one of the community support threads on Microsoft's site by a user named Rob Sparre:
A decision has already been made by MSFT to kill the nice streamlined and useful Yammer desktop app and replace it with the horrible desktop notifier and force us to keep a web browser open with a busy and bulky screen layout. It is disappointing to lose the nice app and have to use a big fat confusing web page. You may as well close this thread as I do not foresee any good news about it in the future.
Exactly.
Even if I had the new "app" on my Mac, my only choice now is to use the web interface to interact with Yammer.
The problem with using a web interface is that at any given moment I've typically got 57 zillion web browser windows and tabs open on my system and... somewhere ... buried in all those windows and tabs is going to be Yammer.
Yammer is already "yet-another-place-to-check" that isn't fully part of my workflow, and so I don't check it all that much... but having the separate desktop application provided several benefits on my Mac:
1. A SEPARATE APP I COULD SWITCH TO - If I ever wanted to see what was being posted in Yammer I could just click on the Dock icon or Alt+Tab over to Yammer and check in on the flow of messages.
2. A DOCK ICON WITH NOTIFICATIONS - Similarly, Yammer has its own icon in my "dock" on the bottom of my Mac's desktop where it can get a little red circle with the number of new messages in it. A visual indicator that I might want to go check it out.
3. A SIMPLE, COMPACT WINDOW - As the user Rob Sparre pointed out above, the Yammer Desktop client provides a nice easy way to keep track of the feed and interact with messages there. Simple. Easy.
Now judging from other comments I'm guessing that keeping up another desktop client - and one based on rival Adobe Systems' AIR technology, at that - was too much effort for the current staffing level that Microsoft has for Yammer developers.
I understand. You have to prioritize and part of that involves looking at what you remove. I get that. I'm a long-time Apple user... I'm used to having functionality stripped away. :-)
But it's just a disappointment, particularly that they offer no other replacement for Mac users (and a shadow of a replacement for Windows users).
Yes, there are the mobile apps for iPhone, iPad and Android... but the thing is that I most often post to Yammer when I am on my work computers... NOT when I am mobile. I often want to share links to things I am working on or articles I find interesting. I'm not going to switch to a mobile device to do that!
So as of tomorrow I don't expect I'll be using Yammer nearly as much as I have been. Sure, I can login to the bloated, Facebook-like web interface... and sure, I can bookmark it or make a pinned tab or something like this... but as I said, Yammer was already "yet-another-place-to-check". The app just made that easier.
Goodbye, Yammer Desktop... 'twas nice knowing you...
UPDATE, April 15, 2014 - As promised, the Yammer Desktop app is dead today:
Apr 14
FIR #751 – 4/14/14 – For Immediate Release
Apr 11
What Can App Developers Learn From Heartbleed?
What lessons can application developers take from the Heartbleed bug? How can we use this experience to make the Internet more secure?

Unless you have been offline in a cave for the past few days, odds are that you’ve seen the many stories about the Heartbleed bug (and many more stories today) and, hopefully, have taken some action to update any sites you have that use the OpenSSL library. If you haven’t, then stop reading this post and go update your systems! (You can test whether your sites are vulnerable using one of the Heartbleed test tools.)

While you are at it, this would also be a good time to change your passwords at services that were affected (after they have fixed their servers). There is an enormous list covering the Alexa top 10,000 sites out there, but sites like Mashable have summarized the major services affected. (And periodically changing your passwords is just a general “best practice”, so even if a site was not affected, why not spend a few minutes to make the change?)
Client Applications Need Updating, Too
For application developers, though, it is also important to update any client applications you may have that use the OpenSSL libraries to implement TLS/SSL connections. While most of the attention has been focused on how attackers can gain access to information stored on servers, it is also true that a malicious site could harvest random blocks of memory from clients visiting that site. There is even demonstration code that lets you test this with your clients. Now, granted, for this attack to work an attacker would need to set up a malicious site and get you to visit the site through, for instance, a phishing email or link shared through social media. The attacker could then send malformed heartbeat messages to your vulnerable client in an attempt to read random blocks of memory… which then may or may not have any useful information in them.
Again, the path for an attacker to actually exploit this would be a bit complex, but you definitely should test any client applications you have that rely on any OpenSSL libraries.
With all that said, since we have started this “TLS For Applications” topic here are on Deploy360, what are some of the important lessons we can take away from this experience? Here are a few I see coming out of this – I’d love to hear the lessons you take from all of this in the comments.
Security Testing Is Critical
It turns out that this was an incredibly trivial coding error. As Sean Cassidy points out in his excellent Diagnosis of the OpenSSL Heartbleed Bug, the issue boils down to this:
What if the requester didn’t actually supply payload bytes, like she said she did? What if pl really is only one byte? Then the read from memcpy is going to read whatever memory was near the SSLv3 record and within the same process.
There was no checking on the input and this allowed reading from other parts of the computer’s memory. As Cassidy later writes about the fix:
This does two things: the first check stops zero-length heartbeats. The second check checks to make sure that the actual record length is sufficiently long. That’s it.
Today’s XKCD comic explains all of this even more simply.
This is the kind of trivial mistake that probably every developer has made at some point. I am sure that if I went back through the many lines of code in my past I’d find cases where I didn’t do the appropriate input or boundary checking. That is exactly why security testing matters: set up security unit tests that run as part of the ongoing testing of the application, conduct ongoing security audits, and make sure reviewers of code submissions are also probing for security weaknesses. Mistakes like this are inevitable; you need testing to catch them.
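To make the lesson concrete, here is a minimal Python sketch (not OpenSSL’s actual C code) of a parser for a hypothetical heartbeat-style record, with the bounds check that the buggy code omitted:

```python
import struct

def parse_heartbeat(record: bytes) -> bytes:
    """Parse a simplified heartbeat record: a 1-byte type, a 2-byte
    claimed payload length, then the payload itself. The whole point
    is to verify the claimed length before trusting it."""
    if len(record) < 3:
        raise ValueError("record too short for header")
    (claimed_len,) = struct.unpack(">H", record[1:3])
    # The Heartbleed-class mistake was echoing claimed_len bytes back
    # without checking that the record actually contains that many bytes.
    if claimed_len == 0 or len(record) - 3 < claimed_len:
        raise ValueError("claimed payload length exceeds record size")
    return record[3:3 + claimed_len]

# A well-formed record round-trips its payload...
print(parse_heartbeat(b"\x01\x00\x04ping"))  # → b'ping'
```

In C, the equivalent unchecked read walks off the end of the record into adjacent process memory; in a memory-safe language the slice would simply come back short. The validation principle is the same either way: never trust a length field supplied by the peer.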
In this instance it just happens that the mistake was in a piece of software that has now become critical for much of the Internet! Which leads to a second lesson…
Having A Rapid Upgrade Path/Plan Is Important
As people learned about this bug earlier this week there has been a massive push to upgrade software all across the Internet. Which raises the question: how easy is it for your users to upgrade their software in a high priority situation such as this?
In many cases, it may be quite easy for users to install an update either from some kind of updated package or a download from an online application store. In other cases, it may be extremely difficult to get updates out there. In the midst of all this I read somewhere that many “home routers” may be vulnerable to this bug. Given that these are often something people buy at their local electronics store, plug in, and pretty much forget… the odds of them getting updated any time soon are pretty slim.
Do you have a mechanism whereby people can rapidly deploy critical security fixes?
UPDATE: A ZDNet post notes that both Cisco and Juniper have issued update statements for some of their networking products. I expect other major vendors to follow soon.
Marketing Is Important To Getting Fixes Deployed
Finally, Patrick McKenzie had a great post out titled “What Heartbleed Can Teach The OSS Community About Marketing” that nicely hits on key elements of why we’re seeing so much attention to this – and why we are seeing fixes deployed. He mentions the value of:
- Using a memorable name (“Heartbleed” vs. “CVE-2014-0160”)
- Clear writing
- A dedicated web presence with an easy URL to share
- A visual identity that can be widely re-used
His article is well worth reading for more details. His conclusion includes this paragraph that hit home for me (my emphasis added):
Given the importance of this, we owe the world as responsible professionals to not just produce the engineering artifacts which will correct the problem, but to advocate for their immediate adoption successfully. If we get an A for Good Effort but do not actually achieve adoption because we stick to our usual “Put up an obtuse notice on a server in the middle of nowhere” game plan, the adversaries win. The engineering reality of their compromises cannot be thwarted by effort or the feeling of self-righteousness we get by not getting our hands dirty with marketing, it can only be thwarted by successfully patched systems.
Exactly!
We need to make it easy for people to deploy our technologies – and our updates to those technologies. (Sound like a familiar theme?)
What other lessons have you taken from this Heartbleed bug? What else should application developers be thinking about to make TLS/SSL usage more secure?
Please do leave a comment here or on social media sites where this article is posted. (And if you’re interested in helping us get more documentation out to help app developers with TLS/SSL, how about checking out our content roadmap for the TLS area? What other items should we include? Do you know of existing documents we should consider pointing to? Interested in writing some documents? Please do let us know.)
P.S. There’s now a post out about the process the Codenomicon team went through in disclosing the bug that is worth reading.
Apr 07
FIR #750 – 4/7/14 – For Immediate Release
Apr 04
TDYR #146 – Running In The Snow And Sleet Of Colorado
Apr 03
Vint Cerf: I want all of you to ask your ISPs what their plan is for IPv6
We need to stop running the experimental version of the Internet and move to the production version of the Internet running IPv6! That was among many points made yesterday in a great Google+ Hangout with TWIT TV host Leo Laporte by Google Chief Internet Evangelist Vint Cerf. He also made a great request to everyone watching to ask their Internet Service Providers (ISPs) about when the ISPs would have IPv6 available. He said (and we hear this, too!) that ISPs are complaining that “no one is asking for IPv6” – and so he asked all the viewers to start asking their ISPs! The fun part was that you could see tweets happening almost right away from people asking that exact question of their ISPs!
Vint talked about the origins of IPv4 addressing – and said that we’ve all been running “the experimental version” of the Internet ever since… and that we need to move to “the production version” of the Internet running IPv6: “Get your v6 in place so that you can run the 21st century version of the Internet!”
He also talked about how bad Carrier Grade NAT (CGN) is, the rise of the “Internet of Things” and the importance of security… along with some fun stories about the early days of ARPANET, Slovenia and more. It’s all well worth a listen!
And if you’d like to get started with IPv6, check out our IPv6 resources and let us know how we can help you!
Apr 02
TDYR #145 – How Can We Strengthen The Internet Against Attacks Such As What We See In Turkey
Apr 01
Turkish Hijacking of DNS Providers Shows Clear Need For Deploying BGP And DNS Security
Over the weekend there were extremely disturbing reports out of Turkey of escalations in the attempts by the Turkish government to block social media sites such as Twitter and YouTube. The steps now being taken appear to have the Turkish Internet service providers (ISPs) hijacking the routes to public DNS servers such as those operated by Google and masquerading as those DNS servers to provide answers back to their citizens.
Effectively, the Turkish ISPs, operating to comply with a Turkish government ban, are performing a “man-in-the-middle” (MiTM) attack against their citizens and giving them false information.
The Internet Society made a statement on the subject yesterday, explaining its “deep concern” for the situation, and our Chief Internet Technology Officer Leslie Daigle has described how these recent moves “represent an attack not just on DNS infrastructure, but on the global Internet routing system itself.”
Background
As we noted ten days ago, ISPs in Turkey started out attempting to implement the government’s ban by simply blocking those sites in DNS. When Turkish citizens tried to go to those social media sites, their device would query DNS to get the correct IP address to connect to. The Turkish ISPs who were providing the DNS servers used by the Turkish citizens simply failed to give back a response for Twitter and YouTube.
Turkish citizens found they could get around this block by simply changing their devices’ DNS settings to point to open public DNS resolvers such as those operated by Google.
Predictably, the Turkish ISPs then attempted to block the addresses of Google’s Public DNS servers and other similar services. The ISPs then found themselves in the typical game of “whack-a-mole” with their citizens: the citizens would find new ways to get around the censorship... and the ISPs would then try to shut those down, too.
BGP Hijacking
Starting this past Saturday, March 29, though, reports started coming in that the Turkish ISPs were taking this to a whole new level by hijacking routing of the Border Gateway Protocol (BGP) and pretending to be Google’s Public DNS servers (and the servers of other similar services).
In other words, the devices operated by Turkish citizens on Turkish networks were connecting to what they thought were Google’s Public DNS servers (and other services) and were getting back answers from those services.
The answers the Turkish citizens were receiving were just the wrong answers.
Instead of going to Twitter or YouTube they were being redirected to sites operated by Turkish ISPs. Google confirmed this in a post on their Online Security Blog that included in part:
A DNS server tells your computer the address of a server it’s looking for, in the same way that you might look up a phone number in a phone book. Google operates DNS servers because we believe that you should be able to quickly and securely make your way to whatever host you’re looking for, be it YouTube, Twitter, or any other.
But imagine if someone had changed out your phone book with another one, which looks pretty much the same as before, except that the listings for a few people showed the wrong phone number. That’s essentially what’s happened: Turkish ISPs have set up servers that masquerade as Google’s DNS service.
Writing over on the BGPMon blog, Andree Toonk detailed the specifics of the BGP route hijack that took place. Essentially, the Turkish ISPs started “advertising” a more specific route to Google’s Public DNS servers. The way BGP works, Google advertises a route for traffic to reach the servers on its network; as the BGPMon post indicates, that is normally an “8.8.8.0/24” route directing traffic to AS 15169. The Turkish ISPs, however, advertised a more specific “8.8.8.8/32” route that pointed into their own networks.
In BGP, a router typically selects the most specific route as the one to use to connect to a given IP address. So all the routers on networks connected to Turkish ISPs would use this very specific route instead of the one advertised by Google.
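A rough Python sketch, using the standard library’s ipaddress module, shows why the hijacker’s announcement wins; the prefixes here mirror the example above, and the function is of course a drastic simplification of real BGP best-path selection:

```python
import ipaddress

def select_route(dest, advertised):
    """Pick the most specific (longest-prefix) route covering the
    destination address, as a router's route lookup would."""
    addr = ipaddress.ip_address(dest)
    matches = [ipaddress.ip_network(p) for p in advertised
               if addr in ipaddress.ip_network(p)]
    return str(max(matches, key=lambda net: net.prefixlen))

# Google legitimately advertises 8.8.8.0/24; the hijackers inject a
# more specific 8.8.8.8/32 pointing at their own network, and it wins.
routes = ["8.8.8.0/24", "8.8.8.8/32"]
print(select_route("8.8.8.8", routes))  # → 8.8.8.8/32
```

Note that nothing in this selection logic asks whether the /32 announcement is *legitimate*; longest-prefix match simply prefers it. That is precisely the gap that origin validation (below) aims to close.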
They apparently did this for all of Google’s Public DNS addresses, as well as those of other open public DNS providers. Over on the Renesys Blog, Earl Zmijewski shared their observations, including showing precisely when the hijacking occurred:
The Turkish ISPs are pretending to be Google’s specific DNS servers to everyone who is connected to their network.
Delivering False DNS Information
The Turkish ISPs went a step further, though, in that they set up their own DNS servers that answered as if they were Google’s Public DNS servers. As Andree Toonk wrote on the BGPmon blog:
Turk Telekom went one step further, instead of null routing this IP address they brought up servers with the IP addresses of the hijacked DNS servers and are now pretending to be these DNS servers. These new fake servers are receiving traffic for 8.8.8.8 and other popular DNS providers and are answering DNS queries for the incoming DNS requests.
Stéphane Bortzmeyer also documented this in a lengthy post on his blog where he used the RIPE NCC’s Atlas probe network to show that DNS answers in Turkey are different from those in other areas. The Renesys blog post also confirmed this, as did many posts on social media services and other online sites. A good number of tech media sites have weighed in on the matter as well.
The Need To Secure BGP
From our Deploy360 point of view, this kind of attack against the Internet provides a great case study of why we need to better secure BGP and why we need to get DNSSEC validation more widely deployed.
With BGP, the fact that anyone can advertise a route for any other network means that ISPs can do precisely what the Turkish ISPs have done and hijack routes to masquerade as anyone else. Clearly this is unacceptable. As we talk about on our “Securing BGP” page, and is also detailed more deeply in the BGP Operations And Security Internet-Draft, there are efforts underway to deploy “secure origin validation” so that routers in the network know which advertised routes to trust and which ones not to trust.
If the routers on networks in Turkey had secure origin validation in place, when they received the more specific route from the Turkish ISPs they could have checked the origin, realized that the route advertisement was not coming from the operator of the original network and simply disregarded the more specific route. They would have continued to use the original routes that were advertised by the original network operators.
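The idea can be sketched in Python against a hypothetical ROA (Route Origin Authorization) table; this is a simplification of RPKI origin validation for illustration, not a real implementation, and the AS numbers beyond Google’s 15169 are arbitrary:

```python
import ipaddress

# Hypothetical ROA table: prefix -> (authorized origin AS, max prefix length).
# A real router would obtain validated ROAs from the RPKI system instead.
ROAS = {"8.8.8.0/24": (15169, 24)}

def validate_origin(prefix, origin_as):
    """Classify a BGP announcement roughly as RPKI origin validation
    does: 'valid', 'invalid', or 'not-found' (no covering ROA)."""
    announced = ipaddress.ip_network(prefix)
    for roa_prefix, (asn, max_len) in ROAS.items():
        if announced.subnet_of(ipaddress.ip_network(roa_prefix)):
            if origin_as == asn and announced.prefixlen <= max_len:
                return "valid"
            return "invalid"
    return "not-found"

print(validate_origin("8.8.8.0/24", 15169))  # → valid (Google's own route)
print(validate_origin("8.8.8.8/32", 64512))  # → invalid (hijacked /32)
```

A router configured to drop "invalid" announcements would simply never install the hijacker’s /32, even though longest-prefix match would otherwise prefer it.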
Now, granted, if the ONLY routes from networks inside Turkey out to the rest of the Internet run through a small number of large Turkish ISPs who work with the government to enforce the bans, then this kind of origin validation will not help the “downstream” networks. They may disregard the announced specific route because of origin validation, but their traffic using the original route still has to travel through the networks of those few large ISPs, who can then perform the BGP hijacking within their own networks. However, if any of the downstream networks have alternate Internet connections (and this may not be possible within Turkey), they may be able to route out through those connections instead.
It is also useful to note that secure origin validation could help networks outside of Turkey. When a government is causing network operators to mess around with the routing tables that make up the fundamental architecture of the Internet, they are playing with fire. One mistake could have a very large impact on the rest of the Internet, such as the time when a Pakistani ISP rerouted global YouTube traffic to a network in Pakistan back in 2008! In their escalating attempts to block access for Turkish users, it is entirely possible that someone at one of the Turkish ISPs could leak incorrect routes out into the larger Internet. Secure origin validation running on other networks around the Internet would prevent these incorrect routes from being taken seriously.
Where DNSSEC Would Help
On the DNSSEC side, if Turkish citizens had DNSSEC-validating DNS resolvers running on their local networks, or, even better, on their actual devices, and if, for instance, Google had DNSSEC-signed the DNS records for its Public DNS servers, then Turkish users would be able to know that they were not getting to the correct servers. Note that this would not help them reach the real servers... but they would know that they were not getting the correct information. Applications that validated the DNSSEC signatures on information retrieved from DNS could then discard the invalid information and try other ways to get it.
DNSSEC helps ensure that you are getting to the correct site and not to a site set up by, for example, a spammer or phisher trying to steal your identity. Similarly it could protect you from going to sites set up by a government (or via a government mandate) that are pretending to be a site that they are not. For this to work, of course, the original sites (such as Twitter and YouTube) need to have their DNS information signed with DNSSEC, and users out on the Internet need to have DNSSEC validation happening in their local DNS resolvers.
Which is why we need to get DNSSEC deployed as fast as possible – to ensure that the information that we all get out of DNS is the same information that was put in to DNS by the operators of a given domain… and not the information put in by an attacker, which, in this case, could be ISPs acting on behalf of a government.
Again, this would not necessarily help a Turkish user get to Twitter or YouTube, but would prevent them from going to spoofed sites. Additionally, if the operating system were validating the DNSSEC signatures on name server records the system could have noticed that the information it was getting back from, for instance, Google’s Public DNS, did not validate with the “global chain of trust” and so could have warned that the DNS information was suspicious (or perhaps chosen to try to use additional DNS servers that did validate correctly).
How To Help
The question now is what we can do to strengthen the Internet against these kinds of attacks on its infrastructure. Within our area of focus, we have three requests:
1. Understand how to secure BGP, and do so! - Please visit our “Securing BGP” section of the site, read the BGP Operations and Security Internet Draft, look at our BGP content roadmap and see if there are any documents there that you can contribute to help us build out our content and get more people taking these steps to secure their routers. If you are a network operator, any steps you can take to make your routers more secure will go far.
2. Deploy DNSSEC validation - Wherever you can, turn on DNSSEC validation in any DNS recursive resolvers. The steps to do so are very simple for the common DNS resolvers.
3. Sign your domains with DNSSEC – If you have a domain registered, see if you can sign it with DNSSEC (here are the steps you need) and if you encounter any issues please raise the issue with your domain name registrar, DNS hosting operator, IT department or whomever is blocking the process.
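As one concrete illustration of step 2: assuming you run a BIND 9 recursive resolver (version 9.8 or later), validation can be enabled with a single option in named.conf; other resolvers such as Unbound have equivalent settings, so check your resolver’s documentation for the exact syntax:

```
options {
    // Use the built-in root trust anchor and validate all responses.
    dnssec-validation auto;
};
```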
These steps will make attacks on the Internet’s infrastructure, such as those happening in Turkey today, more difficult and more costly for attackers to mount.
Beyond these steps, this situation clearly points out the need for a wider diversity of Internet access methods. Even with all of the steps above implemented, Turkish users who are limited to a handful of Turkish ISPs have no choice about the default routes and connections they receive. If more options were available in the region, those users’ access to information on the Internet would be far harder to restrict.
The Internet needs to be hardened against attacks such as these. Please help make the Internet stronger!