Category: Security

Webinar – May 18 – WannaCry Ransomware: Why is it happening and (how) is it going to end?

What is happening with the WannaCry ransomware that has been attacking unpatched Windows computers around the world? How will it all end? What do we need to do collectively to deal with attacks like this? (Hint: Read Olaf’s post.)

To learn more and pose questions to a panel of experts, you can join our partners at the Geneva Internet Platform and Diplo Foundation for a webinar on “Decrypting the WannaCry ransomware: Why is it happening and (how) is it going to end?”

  • Thursday, May 18 at 11:00 UTC (13:00 CEST) 

Read more on the event page – and register for free.

Our Niel Harper, author of the recent post “6 Tips for Protecting Against Ransomware”, will participate as one of the panelists.

As noted in the session abstract:

The webinar will provide an analysis of the main technological, geopolitical, legal, and economic aspects of the ransomware. Experts from different fields will discuss why ransomware has become a major issue. Can such attacks be prevented by technological measures alone? Is there a need for a legal response, such as Microsoft’s proposal for the Digital Geneva Convention? Is raising more awareness among users the ultimate solution?

The webinar will discuss whether it is possible to put a stop to malicious software, or whether it should be considered the price we pay for the many advantages of the Internet. Policy choices will have to be made sooner rather than later. The aim of the discussion is to explore those choices and help make them informed ones.

We encourage you to attend and share the information with others.

NOTE: If 11:00 UTC is a bit too early or late for you, the webinar will be recorded so that you can view it later.

To help understand more, the Geneva Internet Platform Digital Watch team has prepared this excellent page of information:

See also our blog posts:

Image credit: a screenshot of the WannaCry visualization provided by MalwareTech.

The post Webinar – May 18 – WannaCry Ransomware: Why is it happening and (how) is it going to end? appeared first on Internet Society.

Encryption is critical for business communication

Imagine if all your business contracts were sent to customers written on postcards. Everyone who happened to see a postcard could see exactly what you were going to charge the customer, how much of your product the customer was ordering – and all of the information about the customer.

Your competition, naturally, could take that information and send a contract to that customer of yours that undercuts your proposal and offers better terms. They could also share that information with others to let them know that this customer buys from you. (Or, at least, they used to!) Your customer, too, could potentially see what you are charging other customers.


In the physical world, of course, we don’t do this. We fold up contracts and we put them in envelopes. We might then put the sealed envelope inside a larger courier envelope. If we are really paranoid we might put them inside “tamper-proof” envelopes – or envelopes that can only be opened with a specific key.

But in the online world we don’t have these same protections by default. Every message you send has historically been broken down into many small packets and sent – unprotected – across the Internet. This is the digital equivalent of sending everything on postcards.

We need to protect our online business communication.

We need digital envelopes

The solution we have is to use encryption to protect our online information. We need to stop sending postcards – and put digital envelopes around all of our data.

We need to encrypt the information when we are sending it between people. We do this today online with technologies such as the HTTPS “lock” we see in our browsers (which is actually Transport Layer Security or “TLS”, formerly called “SSL”).
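As an illustration of what sits behind that browser “lock”, here is a minimal Python sketch using only the standard library. The default context settings shown are exactly what turn a connection into an “envelope” rather than a postcard; the fetch function is a simplified example, not production code:

```python
import socket
import ssl

# A default context enforces certificate checking - the basis of the HTTPS "lock".
context = ssl.create_default_context()           # loads the system's trusted CA roots
assert context.check_hostname                    # the certificate must match the host name
assert context.verify_mode == ssl.CERT_REQUIRED  # unverified certificates are rejected

def fetch_over_tls(host: str, port: int = 443) -> bytes:
    """Open a TCP connection and wrap it in TLS before any data is sent."""
    with socket.create_connection((host, port)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            # Everything sent from here on travels inside the encrypted "envelope".
            tls_sock.sendall(b"GET / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
            return tls_sock.recv(4096)
```

If either of those two context settings were turned off, the connection would still be encrypted but you could no longer be sure *who* you were talking to – which is why both are on by default.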

If we are to have safe, secure, and trusted economic transactions over the Internet we must know that only the people involved with the transaction can see the information.

We need digital envelopes – THAT is why we need encryption.

Learn more:

P.S. Some readers might notice that regular physical envelopes can be opened at the post office, in the company mail room, by customs officials at a border, or by other people who intercept the envelopes. That is true in the online world, too. There are different types of encryption. Some can be intercepted by people in the middle (what we call “hop-by-hop” encryption) and some types of encryption are secure between the sender and receiver (what we call “end-to-end” encryption). But that’s the topic for another blog post…

The post Encryption is critical for business communication appeared first on Internet Society.

Rough Guide to IETF 98: DNS Privacy and Security, including DNSSEC

It is a remarkably quiet week for DNS security and privacy topics at the IETF 98 meeting in Chicago next week. Both the DANE and DPRIVE working groups are moving along very well with their work on their mailing lists and so chose not to meet in Chicago. Similarly, with DNSSEC deployment steadily increasing (as we outlined in the 2016 State of DNSSEC Deployment report in December), the work to be discussed in DNS Operations (DNSOP) is more about exploring ideas to make DNSSEC even more secure.

Here is a quick view of what is happening in Chicago.

IETF 98 Hackathon

Over the weekend (25-26 March) we’ll have a good-sized “DNS team” in the IETF 98 Hackathon working on various projects around DNSSEC, DANE, DNS Privacy, using DNS over TLS and much more. This time the work will include a team looking at how some DNS toolkits can work with the impending Root KSK Rollover in October 2017. More specific information is in the IETF 98 Hackathon wiki. Anyone is welcome to join us for part or all of that event.

DNS Operations (DNSOP)

The DNS Operations (DNSOP) Working Group meets on Monday afternoon from 13:00-15:00 CDT. The DNSOP agenda includes the following items related to DNSSEC:

Some of the other discussions, such as DNS over TCP, also have potential impacts on DNS security and privacy.

DNS Service Discovery (DNSSD)

On Tuesday, the  Extensions for Scalable DNS Service Discovery (DNSSD) Working Group meets from 16:40-18:40 CDT. DNSSD is not one of the groups we regularly follow as its focus is around how DNS can be used to discover services available on a network (for example, a printer or file server). However, in Chicago the DNSSD agenda specifically has a discussion around “Privacy Extensions” (see draft-ietf-dnssd-privacy).

DNSSEC Coordination informal breakfast meeting

Finally, on Friday morning before the sessions start we are planning an informal gathering of people involved with DNSSEC. We’ve done this at many of the IETF meetings over the past few years and it’s been a good way to connect and talk about various projects. True to the “informal” nature, we’re not sure of the location and time yet (and we are not sure if it will involve food or just be a meeting). If you would like to join us, please drop me an email or join the dnssec-coord mailing list.

Other Working Groups

Right before the DNSSD Working Group on Tuesday, the Using TLS in Applications (UTA) WG will meet from 14:50-16:20 and will be covering several ideas for “Strict Transport Security” (STS) for email. While not directly tied to DNSSEC or DANE, they do use DNS for these security mechanisms. And then in the final session on Friday, from 11:50-13:20, the IPSECME WG will have a discussion about “split DNS” and how that impacts VPNs (see draft-ietf-ipsecme-split-dns).

P.S. For more information about DNSSEC and DANE and how you can get them deployed for your networks and domains, please see our Deploy360 site:

Relevant Working Groups at IETF 98:

DNSOP (DNS Operations) WG 
Monday, 27 March 2017, 13:00-15:00 CDT (UTC-5), Zurich D

DNSSD (Extensions for Scalable DNS Service Discovery) WG 
Tuesday, 28 March 2017, 16:40 – 18:40 CDT (UTC-5), Zurich B

Follow Us

There’s a lot going on in Chicago, and whether you plan to be there or join remotely, there’s much to monitor. To follow along as we dole out this series of Rough Guide to IETF blog posts, follow us on the Internet Technology Matters blog, Twitter, Facebook, Google+, or via RSS.

The post Rough Guide to IETF 98: DNS Privacy and Security, including DNSSEC appeared first on Internet Society.

The Danger of Giving Up Social Media Passwords – So Many Other Services Are Connected

“What’s the harm in giving up my Twitter password?” you might say. “All someone can do is see my direct messages and post a tweet from me, right?”

Think again. The reality today is that social media services are used for far more than just posting updates or photos of cats. They also act as “identity providers” allowing us to easily login to other sites and services. 

We’ve all seen the “Login with Twitter” or “Continue with Facebook” buttons on various sites. Or for Google or LinkedIn. These offer a tremendous convenience. You can rapidly sign into sites without having to remember yet-another-password.


… if you give your social media passwords to someone, they could potentially[1]:

  • Impersonate you on social media accounts and post updates in your name.
  • Sign in to the comment sections of various news media sites and leave comments using your name.
  • Connect in to photo sites and see your photos, and modify or delete the photos, or post new ones in your name.
  • Sign in to e-commerce sites, view your orders and purchase items.
  • Login to video sites and see what videos you have watched, or post new ones to your account.
  • Login to your Medium account, view and change any articles you have written, add new comments as you.
  • Sign in to Goodreads, view all your books, see all the lists of what you want to read, view all your reviews and post reviews in your name.
  • Login to your Spotify account and learn all about what kind of music you like to listen to.

And that’s only a small number of examples.

We live in an era of highly-connected systems. And there are so many systems and services! The convenience of using our social media accounts to login is easy to understand.

But… if you give someone your password to a social media account, or are required to give your social media passwords to someone, you are giving them access to so much more than just that social media service.

What can you do?

1. Don’t give out your social media passwords!

2. Understand where your social media IDs are being used. In both Twitter and Facebook you can go into your “Settings” and choose “Apps” to see where you have granted access. You can revoke access there for sites and services you no longer use.

3. Think about whether you want to continue using your social media IDs in so many places. Does the convenience outweigh the issue of having so many services linked to one identity?

4. Enable 2-Factor Authentication on sites that offer it, which requires a second step beyond just your password to login. These are very easy to use, often using a phone or a small and inexpensive “dongle” that fits on your keyring.[2] Do note that this may not help if you are required by authorities to provide your social media passwords, as they may also require you to provide the device used for two-factor authentication.

5. Use a password manager instead of using your social media ID to login to other sites, which enables you to generate and use very strong passwords and access them all with one master password. There are many excellent free and paid options available for both computers and mobile devices, with a variety of features.

6. Spread the word. Help others understand how critically important our social media passwords are.
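On point 4: the one-time codes that many phone-based second factors display are generated with the TOTP algorithm (RFC 6238, built on RFC 4226’s HOTP). A minimal sketch in Python using only the standard library – an illustration of the algorithm, not how any particular site implements it:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    # HMAC-SHA1 over the 8-byte big-endian counter, keyed with the shared secret.
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low 4 bits of the last byte pick a 4-byte window.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a 30-second counter."""
    t = int(time.time()) if for_time is None else for_time
    return hotp(key, t // step, digits)
```

Because the code depends on a secret only you and the site share, a stolen password alone is not enough to log in – which is exactly why the second factor helps.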

P.S. For more ideas, please see

[1] Depending upon how you have configured the service to work.

[2] The FIDO Alliance is a leader in this area, and a list of enabled sites and certified products is available on their site

The post The Danger of Giving Up Social Media Passwords – So Many Other Services Are Connected appeared first on Internet Society.

CITO Olaf Kolkman Speaking at RSA 2017 about IoT Security with Bruce Schneier

Today at the RSA Conference 2017 in San Francisco, our Chief Internet Technology Officer Olaf Kolkman will be speaking as part of a panel on:

Internet of Insecurity: Can Industry Solve It or Is Regulation Required?

The abstract of the session is:

The rise of IoT has brought forth a new generation of devices and services representing significant innovation, yet all too many ship insecure and are not supported over their life. They have become proxies for abuse with a capacity for causing significant harm. Can we wait for industry and stakeholders to adopt trust frameworks and seal programs or do we need government to step in?

The other panelist will be renowned security researcher Bruce Schneier, and the moderator is Craig Spiezle, Executive Director and President of the Online Trust Alliance.

The panel starts at 8:00am Pacific (UTC-8) in the Moscone North 130 room. Unfortunately it is not being live streamed, but you can follow our @InternetSociety account on Twitter for live updates.

As background reading related to Internet of Things (IoT) security, I suggest:

If you are there at the RSA Conference today, please do visit this session and engage in the discussion.

If you are a journalist and would like to speak with Olaf more about this topic, please contact Allesandra Desantillana who is at the RSA Conference and can assist in connecting you with Olaf.

Please also watch this blog as we plan to post more information after the event.

The post CITO Olaf Kolkman Speaking at RSA 2017 about IoT Security with Bruce Schneier appeared first on Internet Society.

State of DNSSEC Deployment 2016 report shows over 89% of top-level domains signed

Did you know that 89% of top-level domains are now signed with DNSSEC? Or that over 88% of .GOV domains and over 50% of .CZ domains are signed? Were you aware that over 103,000 domains use DANE and DNSSEC to provide a higher level of security for email? Or that 80% of clients request DNSSEC signature records in DNS queries?

All these facts and much more are available in our new State of DNSSEC Deployment 2016 report.

For many years a wide variety of statistics about DNSSEC deployment have been available, but it’s been challenging to get an overall view. With this report our goal is to help people across the industry understand where DNSSEC deployment stands – and what challenges still need to be overcome.

To back up a bit, the “DNS Security Extensions”, or “DNSSEC”, provide a way to be sure you are communicating with the correct web site, service, or application. Before your mobile phone, laptop or other device connects to a site on the Internet, it must first obtain the correct IP address from the Domain Name System (DNS). Think of DNS similar to the “address book” you may have in your phone. You may look up “Dan York” in your contact list and call me – but underneath that your phone figures out the actual telephone number to call to reach me. DNS provides a similar directory function for the Internet.
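That “address book” lookup is what your device does under the hood every time it connects. For illustration, Python’s standard library exposes it directly (the host name here is just an example):

```python
import socket

def lookup(hostname: str) -> list[str]:
    """Ask the system resolver - and thus DNS - for the addresses behind a name."""
    infos = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    # Each entry's sockaddr starts with the IP address; deduplicate and sort them.
    return sorted({info[4][0] for info in infos})

# lookup("example.com") would return that site's current IPv4/IPv6 addresses.
```

Everything your device does next – the web request, the email delivery, the app sync – depends on that answer being correct, which is the problem DNSSEC addresses.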

The challenge is that there are ways an attacker can spoof the DNS results which could wind up with you connecting to the wrong site. Potentially you could wind up providing information to an attacker or downloading malware.

DNSSEC uses a system of digital signatures – and the checking of digital signatures (what we call “validation”) – to ensure that the information you get out of DNS is the same information that the operators of the domains put into DNS.

At a high level, this is what DNSSEC does – it makes sure you can trust the information you get from DNS. (You can read more on our DNSSEC Basics page.)

The basics of DNSSEC have been standardized for almost 20 years, but until the root zone of DNS was signed in 2010, there wasn’t much deployment. In the six years since, deployment has continued to grow. This report outlines that growth and provides a view into where that growth is happening and much more.

We encourage you to read and share this report widely. And if you haven’t yet started deploying DNSSEC validation on your own networks – or haven’t started signing your domains with DNSSEC – you can visit our Deploy360 Start page to find resources to help you begin.

Using DNSSEC allows us to have a higher level of trust in the domain names we use every day on the Internet. I hope you will join with me and others in deploying DNSSEC and building a more trusted Internet!

The post State of DNSSEC Deployment 2016 report shows over 89% of top-level domains signed appeared first on Internet Society.

How To Survive A DNS DDoS Attack – Consider using multiple DNS providers

How can your company continue to make its website and Internet services available during a massive distributed denial-of-service (DDoS) attack against a DNS hosting provider? In light of last Friday’s attack on Dyn’s DNS infrastructure, many people are asking this question.

One potential solution is to look at using multiple DNS providers for hosting your DNS records. The challenge with Friday’s attack was that so many of the affected companies – Twitter, Github, Spotify, Etsy, SoundCloud and many more – were using ONLY one provider for DNS services. When that DNS provider, Dyn, then came under attack, people couldn’t get to the servers running those services.  It was a single point of failure.  

You can see this yourself right now. If you go to a command line on a Mac or Linux system and type “dig ns twitter.com”,[1] the answer you will see is something like:

twitter.com.  10345  IN  NS  ns1.p34.dynect.net.
twitter.com.  10345  IN  NS  ns2.p34.dynect.net.
twitter.com.  10345  IN  NS  ns3.p34.dynect.net.
twitter.com.  10345  IN  NS  ns4.p34.dynect.net.

What this says is that Twitter is using only Dyn. (“dynect.net” is the domain name of Dyn’s “DynECT” managed DNS service.)

Companies using Dyn who also used another DNS provider, though, had less of an issue. Users may have experienced delays in initially connecting to the services, but they were still able to eventually connect. Here is what Etsy’s DNS looks like after Friday (via “dig ns etsy.com”): eight NS records, four under Dyn’s dynect.net domain and four under Amazon Route 53’s awsdns domains.

Etsy is now using a combination of Dyn’s DynECT DNS services and Amazon’s Route 53 DNS services.

But wait, you say… shouldn’t this be “DNS 101”?

Aren’t you always supposed to have DNS servers spread out across the world?
Why don’t they have “secondary DNS servers”?
Isn’t that a common best practice?

Well, all of these companies did have secondary servers, and their DNS servers were spread out all around the world. This is why users in Asia, for instance, were able to get to Twitter and other sites while users in the USA and Europe were not able to do so.

So what happened? 

It gets a bit complicated.

20 Years Ago…

Jumping back, say, 20 years or so, it was common for everyone to operate their own “authoritative servers” in DNS that would serve out their DNS records. A huge strength of DNS is that it is “distributed and de-centralized”: anyone registering a domain name can operate their own “authoritative servers” and publish all of their own DNS records.

To make this work, you publish “name server” (“NS”) records for each of your domain names that list which DNS servers are “authoritative” for your domain. These are the servers that can answer back with the DNS records that people need to reach your servers and services. 

You need to have at least one authoritative server that gives out your DNS records. Of course, in those early days, if there was a problem with that server and it went offline, people would not be able to get the DNS records that would get them to your other computers and services. Similarly, you could have a problem with your connection to the Internet, and people could not reach your authoritative server.

For that reason the best practice emerged of having a “secondary” authoritative DNS server that contained a copy of all of the DNS records for your domain. The idea was to have this in a different geographic location and on a different network.

On the user end, we use what is called a “recursive DNS resolver” to send out DNS queries and get back the IP addresses that our computers need to connect. Our DNS resolvers will get the list of name servers (“NS records”) and choose one to connect to. If an answer doesn’t come back after some short period of time, the resolver will try the next NS record, and the next… until it runs out of NS records to try. 
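That retry-down-the-NS-list behavior can be sketched in a few lines of Python. This is a simplified model of what a resolver does – the `query_fn` callback stands in for an actual DNS query over the network:

```python
import random

def resolve(qname: str, nameservers: list, query_fn, timeout: float = 2.0):
    """Try each listed authoritative server until one answers (simplified model)."""
    servers = list(nameservers)
    random.shuffle(servers)  # real resolvers also spread load across the NS set
    last_error = None
    for ns in servers:
        try:
            return query_fn(ns, qname, timeout)  # a successful answer ends the loop
        except TimeoutError as err:
            last_error = err  # this server is unreachable - fall through to the next
    # Only if *every* nameserver fails does the lookup fail outright.
    raise RuntimeError("all nameservers failed for " + qname) from last_error
```

This is why having servers in different places normally saves you – and why Friday was different: every server in the list belonged to the same provider under attack.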

Back in July 1997, the IETF published RFC 2182 dedicated to this topic: Selection and Operation of Secondary DNS Servers. It’s fun to go back and read through that document almost 20 years later, as a great deal has changed. But back in the day, this was a common practice:

 The best approach is usually to find an organisation of similar size, and agree to swap secondary zones – each organization agrees to provide a server to act as a secondary server for the other organisation’s zones. 

As noted in RFC 2182, it was common for people to have 2, 3, 4 or even more authoritative servers. One would be the “primary” or master server where changes were made – the others would all be “secondary” servers grabbing copies of the DNS records from the primary server.

Over the years, companies and organizations would spend a great amount of time, energy, and money building out their own DNS server infrastructure. Having this kind of geographic and network resilience was critical to ensure that users and customers could get the DNS records that would get them to the organization’s servers and services.

The Emergence of DNS Hosting Providers

But most people really didn’t want to run their own global infrastructure of DNS servers. They didn’t want to deal with all the headaches of establishing secondary DNS servers and all of that. It was costly and complicated – and just more than most companies wanted to deal with. 

Over time companies emerged that were called “DNS hosting providers” or “DNS providers” who would take care of all of that for you. You simply signed up and delegated operation of your domain name to them – and they did everything else. 

The advantages were – and are today – enormous. Instead of only a couple of secondary DNS servers, you could have tens or even hundreds.  Technologies such as anycast made this possible. The DNS hosting provider would take care of all the data center operation, the geographic diversity, the network diversity… everything.  And they provided you with all this capability on a global and network scale that very few companies could provide all by themselves. 

The DNS hosting providers gave you everything in the RFC 2182 best practices – and so much more!

And so over the past 10 years most companies and people moved to using DNS hosting providers of some form. Often individuals simply use the DNS hosting provided by whatever domain name registrar they use to register their domain name.  Companies have outsourced their DNS hosting to companies such as Dyn, Amazon’s Route 53, CloudFlare, Google’s Cloud DNS, UltraDNS, Verisign and so many more. 

It’s simple and easy … and probably 99.99% of the time it has “just worked”.

And you only needed one DNS provider because they were giving you all the necessary secondary DNS services and diversity protection.

Friday’s Attack

Until Friday. When for some parts of the Internet the DNS hosting services of Dyn didn’t work. 

It’s important to note that Dyn’s overall DNS network still worked. They never lost all their data centers to the attack. People in some parts of the world, such as Asia, continued to be able to get DNS records and connect to all the affected services without any issues.

But on Friday, all the many companies and services that were using Dyn as their only DNS provider suddenly found that a substantial part of the Internet’s user community couldn’t get to their sites. They found that they were sharing the same fate as their DNS provider in a way that would not have been true before the large degree of centralization with DNS hosting providers.

Some companies, like Twitter, stayed with Dyn through the entire process and weathered the storm. Others, like Github, chose to migrate their DNS hosting to another provider.  Still others chose to start using multiple DNS providers. 

Why Doesn’t Everyone Just Use Multiple DNS Providers? 

This would seem the logical question.  But think about that for a second – each of these major DNS providers already has a global, distributed DNS architecture that goes far beyond what companies could provide in the past.

Now we want to ask companies to use multiple of these large-scale DNS providers?

I put this question out in a number of social networks and a friend of mine whose company was affected nailed the issue with this comment:

Because one DNS provider, with over a dozen points-of-presence (POPs) all over the world and anycast, had been sufficient, up until this unprecedented DDoS. We had eight years of 100% availability from Dyn until Friday. Dealing with multiple vendors (and paying for it) didn’t have very good ROI (and I’m still not sure it does, but we’ll do it anyway). 

Others chimed in and I can summarize the answers as:

  • CDNs and GLBs – Most websites no longer sit on a single web server publishing a simple set of HTML files. They are large complex beasts pulling in data from many different servers and sites. And they very often sit behind content delivery networks (CDNs) that cache website content and make it available through “local” servers or global load balancers (GLBs) that redirect visitors to different servers. Most of these CDNs and GLBs work by using DNS to redirect people to the “closest” server (chosen by some algorithm). When using a CDN or GLB, you typically wind up having to use only that service for your DNS hosting.  I’ve found myself in this situation with a few of my own sites where I use a CDN.
  • Features – Many companies use more sophisticated features of DNS hosting providers such as geographic redirection or other mechanisms to manage traffic. Getting multiple providers to modify DNS responses in exactly the same way can be difficult or impossible.
  • Complexity – Beyond CDNs and features, multiple DNS providers simply adds complexity into IT infrastructure. You need to ensure both providers are publishing the same information, and getting that information out to providers can be tricky in some complex networks.
  • Cost – The convenience of using a DNS hosting provider comes at a substantial financial cost. For the scale needed by major Internet services, the DNS providers aren’t cheap. 
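On the “complexity” point, the core chore of keeping two providers in sync can be sketched as a record-set comparison. A hypothetical Python sketch – zone copies are modeled here as plain dictionaries, not any provider’s real API:

```python
def zone_drift(provider_a: dict, provider_b: dict) -> dict:
    """Return the records where two providers' copies of a zone disagree.

    Each zone copy is modeled as {(name, record_type): set_of_values}.
    """
    all_keys = set(provider_a) | set(provider_b)
    return {
        key: (provider_a.get(key, set()), provider_b.get(key, set()))
        for key in all_keys
        if provider_a.get(key, set()) != provider_b.get(key, set())
    }

# Hypothetical zone copies at two providers:
dyn_copy = {("www.example.com", "A"): {"192.0.2.1"}}
r53_copy = {("www.example.com", "A"): {"192.0.2.1", "192.0.2.2"}}
# zone_drift(dyn_copy, r53_copy) flags the A record as out of sync.
```

Automating a check like this is straightforward for plain records; it breaks down precisely where the other bullets bite – CDN aliases and geo-dependent answers have no single “correct” value to compare.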

For all of these reasons and more, it’s not an easy decision for many sites to move to using multiple DNS providers.

It’s complicated.

And yet… 

And yet the type of massive DDoS attacks we saw on Friday may require companies and organizations to rethink their “DNS strategy”. With the continued deployment of the Internet of Insecure Things, in particular, these types of DDoS attacks may become worse before the situation can improve. (Please read Olaf Kolkman’s post for ideas about how we move forward.) There will be more of these attacks.

As my friend wrote in further discussion:

  These days you outsource DNS to a company that provides way more diversity than anyone could in the days before anycast, but the capacity of botnets is still greater than one of the biggest providers, and probably bigger than the top several providers combined.

 And even more to the point:

  The advantage of multiple providers on Friday wasn’t network diversity, it was target diversity.

The attackers targeted Dyn this time, so companies who use DNS services from Amazon, Google, Verisign or others were okay.  Next time the target might be one of the others. Or perhaps attackers may target several.

The longer-term solutions, as Olaf writes about, involve better securing all the devices connected to the Internet to reduce the potential of IoT botnets. They involve continued collaborative work to reduce the effects of malware and bad routing information (for example, through MANRS). They involve continued and improved communication and coordination between network operators and so many others.

But in the meantime, I suspect many companies and organizations will be considering whether it makes sense to engage with multiple DNS providers.  For many, they may be able to do so. Others may need the specialized capabilities of specific providers and find themselves unable to use multiple providers. Some may not find the return on investment warrants it. While others may accept that they must do this to ensure that their services are always available.

Sadly, taking DNS resilience to an even higher level may be what is required today.

What do you think? Do you use multiple DNS providers?  If so, what worked for you? If not, why not? I would be curious to hear from readers, either as comments here or out on social networks.


[1] Windows users do not have the ‘dig’ command by default. Instead you can type “nslookup -type=NS <domainname>”. The results may look different from what is shown here, but will have similar information.

NOTE: I want to thank the people who replied to threads on this topic on Hacker News, in the /r/DNS subreddit and on social media. The comments definitely helped in expanding my own understanding of the complexities of the way DNS providers operate today.

Image credit: a photo I took of a friend’s T-shirt at a conference.

The post How To Survive A DNS DDoS Attack – Consider using multiple DNS providers appeared first on Internet Society.

Heading to Romania to ION Bucharest for DNSSEC, IPv6, routing security and more

This week I will briefly be in Bucharest, Romania, for the Internet Society's ION Bucharest conference. We've got a great set of sessions on the agenda, including:

  • Deploying DNSSEC
  • Romanian DNSSEC Case Study
  • Let's Encrypt & DANE
  • Mind Your MANRS & the Routing Resilience Manifesto
  • The Case for IPv6
  • IPv6 Success Stories
  • What's Happening at the IETF? Internet Standards and How To Get Involved

I will have two roles in the event tomorrow:

I enjoy doing the production of live video streams and so this should be a good bit of fun (it's also intense work in the midst of it).

You can WATCH LIVE starting at 14:00 EEST (UTC+3, or 7 hours ahead of the US East Coast where I live).

The sessions will also be recorded for later viewing.

It will be a short trip for me. I'm currently (Tuesday morning) writing this from the Munich airport. I land in Bucharest tonight. The event is tomorrow - and then I fly home Thursday afternoon.

Despite the short visit, I'm looking forward to it - it should be a great event!

An audio commentary on this topic is also available:

Photo credit: Nico Trinkhaus on Flickr - CC BY NC

ISOC@OECD, Day 2: Kathy Brown’s speech about trust, Hiroshi Esaki speaking about innovation

Today is the first day of the “Ministerial Conference” section of the OECD Ministerial Meeting on the Digital Economy. Yesterday featured the very successful “Stakeholder Forums”, and my colleague Nicolas Seidler wrote about the ITAC Forum that discussed Internet policies, IPv6, IoT, open standards, and Collaborative Security. I also encourage you to read our OECD Ministerial Background Paper to understand why this meeting is so important for Internet Governance.

11:40 am – OECD Stakeholders Armchair Discussion

Our big event today will be the “OECD Stakeholders’ Armchair Discussion”  where our President and CEO Kathy Brown will speak as a member of the Internet Technical Advisory Committee (ITAC) about what was discussed in the ITAC Forum yesterday and also about the view from within the technical community about the need to increase trust in the Internet.

The overall session she is in starts at 11:40 am local time (UTC-5, similar to US Central time) although we are told the armchair discussion should start closer to 12:20 pm.  Each of the four stakeholder advisory committees will provide a statement, and Kathy will be speaking on behalf of ITAC.

16:45 – Stimulating Digital Innovation across the Economy

After Kathy’s session there will be a 1.5 hour lunch break and then the parallel track sessions begin.  The OECD Ministerial Agenda outlines the sessions, including:

  • Economic and Social Benefits of Internet Openness
  • Consumer Trust and Market Growth
  • Stimulating Digital Innovation across the Economy
  • Managing Digital Security and Privacy Risk for Economic and Social Prosperity

While all of the sessions are of interest, our attention will be on the session about “Stimulating Digital Innovation” at 16:45 as ISOC Board of Trustees member Hiroshi Esaki will be one of the speakers on the panel.

We understand that the sessions should be live streamed, but we are uncertain of the exact URL.  We would advise you to visit the OECD live stream page to see what streams are available.

You can also follow our @InternetSociety Twitter account where we will be providing updates using the #OECDdigitalMX hashtag.

Watch this blog, too, as we will be posting several more articles throughout the day!

The post ISOC@OECD, Day 2: Kathy Brown’s speech about trust, Hiroshi Esaki speaking about innovation appeared first on Internet Society.

Watch Live on Friday, 29 April – Kathy Brown At G7 ICT Multi-Stakeholder Conference

On Friday, April 29, you can watch leaders of the technical community, business and civil society address the G7 ICT Ministers at:

The Multi-Stakeholder Conference begins at 9:00 am Japan Standard Time (UTC+9), which is:

  • midnight UTC
  • 2:00 am Central European Summer Time
  • 8:00 pm, Thursday, April 28, Eastern Daylight Time

Internet Society President and CEO Kathy Brown will speak as part of a panel starting at 10:45 am JST. The panel topic is “Sharing common thoughts about Internet governance and cybersecurity”. The other panelists are senior executives from Hitachi, NTT and BT Security. Kathy has published her thoughts about what she will say in the session.

The full agenda for the Multi-Stakeholder Conference is available on the G7 event site.

In preparation for the session, we encourage you to read:

During the event you can also follow our tweets on @ISOCPolicy.

The post Watch Live on Friday, 29 April – Kathy Brown At G7 ICT Multi-Stakeholder Conference appeared first on Internet Society.