Category: Technology

AVFTCN 036 – A Hard Drive Failure, a Puppy, Climatebase, a New Role… and an Unplanned Hiatus

Hi! Remember this newsletter you subscribed to? The one you haven’t seen since back on April 22, 2024?

Welllllll… it’s been a wild ride… and an unplanned hiatus.

But… my goal now is to get back into this a bit more, and so I want to fire off this short note to just give a personal update.

Next time, I’ll be climbing up into that proverbial crow’s nest and looking out at the horizon ahead of us. For this newsletter… I’m going to look back instead at where the ship has sailed over the past few months.

If you visit my danyork.me aggregation site, you'll see that I have published some work articles… but nothing personal since the last issue (035) of this newsletter back on April 22, 2024. No blog posts. No podcast episodes. No newsletters. No livestreaming to Twitch. No… nothing. Zip. Nada. Zilch. Nichts.

So here’s the story…

The fun began back in early May when the 2017 iMac I'd been using to produce my podcasts, do all my livestreaming, and write many of my posts started acting really funky: running very slowly… and freezing completely. Many reboots and upgrade attempts later, I was finally able to identify that it was having "S.M.A.R.T. disk errors". Which wasn't good.

“It’s dead, Jim” would be the Star Trek (Original Series) way to say it.

Given that it's from 2017, I can't upgrade it to the latest macOS. And heck, it's so old Apple won't even give me anything for a trade-in. (But they'll help recycle it for me if I want. 🤦‍♂️) Given other things going on (see below!), fixing it has been super LOW priority… which has meant that my normal platform for content creation and production has been offline. So no podcast episodes or livestreaming.


As that was all happening, a more massive disruption was entering our life – we adopted an 8-week-old puppy!

Named “Barkley”, he’s a very lovable and adorable mix of a pug and a miniature schnauzer (apparently called a “Schnug”) who keeps growing and growing and now at 6 months old is larger than our 17-year-old miniature poodle.

But… my wife and I had never had a puppy before! We've had a couple of dogs (and cats) but they've all been a few years old when we adopted them. Anyone who has had a puppy probably understands what these last months have been like! Constantly watching where he is… "puppy-proofing" the entire house… constantly watching where he is… stopping him from eating everything… trying to train him a bit… constantly watching where he is… trying to prevent him from always attacking our 17yo dog… stopping him from eating whatever… oh, and constantly watching what he's doing. It's been… exhausting!

Basically like having a newborn child again… only one that can run fast all over and has sharp teeth! 😀

But in the end… he’s a wonderful addition and we’ve come to love him dearly. He’s curled up against my foot as I write this… and tomorrow night you’ll find me in a class with him.


While all of this was going on, I was also enrolled as a Climatebase Fellow in a very intense 12-week program of 10-15 hours of sessions each week, all related to improving my understanding of the current state of information and science around climate change. As shown in the image below, it covered a very wide range of topics:

[Image: "Climatebase Fellowship program overview by weeks" – a grid of 12 blocks, one for each week of the program]

Participating in this fellowship was part of my professional development at my employer, the Internet Society. In the last issue, I pointed to my article about “The Internet and Climate Change” and that continues to be an area of great interest and exploration for me. I’ll undoubtedly write some future newsletters specifically around this whole area.

My interest was mostly to refresh my understanding of current climate thinking. I’ve been involved in “environmental” issues since the 1980s, and was very active in the broader movement in the early 1990s, serving in different volunteer leadership roles for different organizations. But then life took me away from that heavy involvement and my knowledge has aged. I heard phrases like “regenerative agriculture” but didn’t know what they meant. (Now I do!)

It was a good program and I met some great people and enjoyed participating in the community (which I am still doing).


As we came into summer here in Vermont, the Climatebase Fellowship wrapped up, but the intensity of the puppy and work and family and everything else continued.

And then a very cool opportunity was presented to me… at the Internet Society we had a new President and CEO, Sally Wentworth, start on September 1, 2024. Back in July she approached me about taking on a new role that we eventually called “CEO communications” where I’m helping with developing and executing plans across both the Internet Society and Internet Society Foundation for consistent communication from the CEO’s office internally, externally, and with our community and partners.

I formally took on this role on September 1 (and mentioned something on LinkedIn later) but began some aspects of it back in August. I’ve known Sally for the 13 years that I’ve been at the Internet Society (she’s been there 15 years) and have deep and great respect for her. So I’ve been excited about the new role, grateful for this opportunity to stretch my own skills in new and different ways… and just… busy! 😀

And now on this 24th day of September… it’s time to get back in the flow again and start creating some content again. There are so many stories to tell… so many changes happening… so much ahead on the horizon… both icebergs to avoid and opportunities to explore!

Time to climb up into that crow’s nest, whip out the spyglass, and get back to looking out ahead at the horizon and sharing what I see!

See you soon!

[The End]


Recent Posts and Podcasts

Here is some of the content I’ve published and produced recently on my personal sites:

  • <nothing!>

[I do still contribute reports to the “monthly” episodes of the For Immediate Release podcast.]

I did publish some new posts for the Internet Society (which has no connection to this newsletter):

More on why so many of them are law-related… in a future newsletter!


Thanks for reading to the end. I welcome any comments and feedback you may have.

Please drop me a note in email – if you are a subscriber, you should just be able to reply back. And if you aren’t a subscriber, just hit this button 👇 and you’ll get future messages.

This IS also a WordPress hosted blog, so you can visit the main site and add a comment to this post, like we used to do back in the glory days of blogging.

Or if you don’t want to do email, send me a message on one of the various social media services where I’ve posted this. (My preference continues to be Mastodon, but I do go on others from time to time.)

Until the next time,
Dan


Connect

The best place to connect with me these days is:

You can also find all the content I’m creating at:

If you use Mastodon or another Fediverse system, you should be able to follow this newsletter by searching for "@crowsnest.danyork.com"

You can also connect with me at these services, although I do not interact there quite as much (listed in decreasing order of usage):


Disclaimer

Disclaimer: This newsletter is a personal project I’ve been doing since 2007, several years before I joined the Internet Society in 2011. While I may at times mention information or activities from the Internet Society, all viewpoints are my personal opinion and do not represent any formal positions or views of the Internet Society. This is just me, saying some of the things on my mind.

AVFTCN 031* – Book: “Making A Metaverse That Matters”

I have always been intrigued by “what comes after the web browser” for how we interact with information online. After all, the web browser has been our primary way of interacting with information since the early 1990s. Even when we are using “apps” on our phones today, in many cases they are effectively web browsers wrapped in a customized layer.

Could there be something more? Could we get to a “3 dimensional” view of information? Could we start interacting more in virtual worlds? Using “virtual reality” (VR)? Or this thing called a “metaverse”?

Some of you may remember VRML from the mid-1990s. And of course in the mid-2000s we saw the rise of Second Life (where I was amused to call my avatar "Dan Go" in homage to a certain artist 😀)… and the ensuing hype cycle of breathless writing in tech publications about how everything would change.

And of course it didn’t… or hasn’t yet. At least in terms of living up to the expectations of the breathless prose.

Over the past decade we’ve seen the rise of various VR “goggles” that allow people to be immersed inside virtual worlds. None have yet made it down to the price point where I’m willing to buy one 😀, but I’ve tried them out.

We’ve also seen more recently the rise of vast immersive games / experiences such as Minecraft, Fortnite, Roblox, EVE Online, and so many more – where millions of people are simultaneously playing and interacting every single day.

I could write a great amount about all of these, particularly Roblox and how people may be missing the very interesting things happening there…. but that’s all for another time.

Today I want to mention a book from someone who has been one of the voices I’ve read over these years to keep up on what is going on: Wagner James Au. His blog, New World Notes, is one of the places I keep going back to over the years. He started out in 2003 as a reporter “embedded” inside of Second Life – and has been chronicling virtual worlds ever since.

His new book is "Making A Metaverse That Matters: From Snow Crash & Second Life to a Virtual World Worth Fighting For", published by Wiley in 2023. I borrowed the book from my local library, but you can buy it on Amazon or at whatever your favorite bookstore is.

I like that he begins with this definition:

The Metaverse is a vast, immersive virtual world simultaneously accessible by millions of people through highly customizable avatars and powerful experience creation tools integrated with the offline world through its virtual economy and external technology.

That works for me. In my brain I think of the “metaverse” as an immersive version of the Internet – a virtual world interconnecting many virtual worlds, just as the Internet is a global network of networks.

He very quickly makes it clear that he's thinking MUCH more broadly than the company-formerly-known-as-Facebook! In fact, despite the name change, Meta as a company is just one of many players (and not even a leading one).

The first hundred pages or so trace the origins of the "metaverse" concept from Neal Stephenson's Snow Crash novel in 1992 on up through Second Life and into the Facebook era. Understandably, given his background, there's a heavy emphasis on everything that emerged with Second Life.

The second 80-ish pages explore the newer entrants: Minecraft, Roblox, Fortnite, VRChat, and Neal Stephenson's own Lamina1. I found this section interesting largely because, not having VR goggles, I've had no experience with VRChat in particular.

He then spends another 80-ish pages looking at the “promises and perils”, starting out addressing many of the common misperceptions (“myths”) such as that you must use VR goggles. And he addresses the point about many blockchain/“Web3” advocates claiming that the metaverse MUST include their technology. (Nope! They aren’t needed.) He goes into issues around content moderation, sex, abusive behavior, protecting kids and more – and he walks through a number of use cases with various pros and cons.

Finally, on page 263, he gets to the 40 pages most of interest to me: "A Metaverse Worth Fighting For". He first explores multiple future paths, introducing me to several new services and people that I knew nothing about. He talks about generational issues, AI, mobile usage and more. Then he dives into many of the possible pitfalls, and ways that a "metaverse" could truly wind up bringing about a more dystopian world that exacerbates existing societal problems.

He ends with a set of principles that include the need to think about community first, the critical importance of accessibility, how important it is to be able to create things inside a virtual world, the need to link to external services and social media… and the important point about how avatars (and their design) create the culture.

His final paragraphs are a call to all of us to help build "a metaverse that matters" – one that connects and enables people rather than dividing them.

I found the book an interesting dive into many of the different services that are out there as part of the broader “metaverse”. It was a great history and overview of so many different services.

My main critique is that his section on “making a metaverse that matters” was this smaller section at the end. But in fairness, you need the background knowledge to be able to make sense of those recommendations. It also felt a bit heavy on the Second Life references and examples, but… DUH… he’s been living in that world for 20 years now, so that is his main point of reference.

Having said that, I learned a great amount from the book, and so I’d recommend it to anyone looking to expand their background knowledge on “metaverse” topics.

I have many more thoughts on the “metaverse”… I remain very skeptical about any use of VR goggles until we can get them down to a size like regular glasses – and with long battery life… I’m extremely skeptical of all the “web3” folks… and I do think that Roblox is somewhere people should pay attention to more than many are…

…but I’ll save those for future newsletters. Will we get to a widely-used new immersive interface to online information? I don’t know… but the experiments are certainly fascinating!

I wish you all the best as 2023 draws to a close and 2024 begins!

P.S. I note that Wagner James Au has some metaverse predictions for 2024.


Thanks for reading to the end. I welcome any comments and feedback you may have. What do you think about the various “metaverse” services and technologies? What did you think about this book if you read it?

Please drop me a note in email – if you are a subscriber, you should just be able to reply back. And if you aren’t a subscriber, just hit this button 👇 and you’ll get future messages.

This IS also a WordPress hosted blog, so you can visit the main site and add a comment to this post, like we used to do back in the glory days of blogging.

Or if you don’t want to do email, send me a message on one of the various social media services where I’ve posted this. (My preference continues to be Mastodon, but I do go on others from time to time.)

Until the next time,
Dan


* – Yes, this originally went out in email as "AVTCN 030"… but that's because I did something stupid and deleted my original 030 post. 🤦‍♂️


Connect

The best place to connect with me these days is:

You can also find all the content I’m creating at:

If you use Mastodon or another Fediverse system, you should be able to follow this newsletter by searching for "@crowsnest.danyork.com@crowsnest.danyork.com"

You can also connect with me at these services, although I do not interact there quite as much (listed in decreasing order of usage):


Disclaimer

Disclaimer: This newsletter is a personal project I’ve been doing since 2007 or 2008, several years before I joined the Internet Society in 2011. While I may at times mention information or activities from the Internet Society, all viewpoints are my personal opinion and do not represent any formal positions or views of the Internet Society. This is just me, saying some of the things on my mind.

Celebrating 50 Years of the RFCs That Define How the Internet Works

[Image: First page of RFC 1]

50 years ago today, on 7 April 1969, the very first “Request for Comments” (RFC) document was published. Titled simply “Host Software”, RFC 1 was written by Steve Crocker to document how packets would be sent from computer to computer in what was then the very early ARPANET. [1]

Steve and the other early authors were just circulating ideas and trying to figure out how to connect the different devices and systems of the early networks that would evolve into the massive network of networks we now call the Internet. They were not trying to create formal standards – they were just writing specifications that would help them be able to connect their computers. Little did they know then that the system they developed would come to later define the standards used to build the Internet.

Today there are over 8,500 RFCs whose publication is managed through a formal process by the RFC Editor team. The Internet Engineering Task Force (IETF) is responsible for the vast majority (but not all) of the RFCs – and there is a strong process through which documents move within the IETF from ideas ("Internet-Drafts" or "I-Ds") into published standards or informational documents.[2]

50 years ago, one of the fundamental differences between the RFC series and other standards efforts at the time was that:

  • Anyone could write an RFC for free.
  • Anyone could read the RFCs for free. They were open to all to read, without any fee or membership.

As Steve Crocker notes in his recollections, this enabled the RFC documents to be widely distributed around the world, and studied by students, developers, vendors and other professionals. This allowed people to learn how the ARPANET, and later the Internet, worked – and to build on that to create new and amazing services, systems and software.

This openness remains true today. While the process of publishing an RFC is more rigorous, anyone can start the process. You are not required to be a member (or pay for a membership) to contribute to or approve standards. And anyone, anywhere, can read all of the RFCs for free. You do not have to pay to download the RFCs, nor do you have to be a member of any organization.
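
To make that concrete: the full text of any RFC – right back to RFC 1 – can be fetched from the RFC Editor's site with any standard HTTP client, no account or fee required. For example:

# Fetch the very first RFC, "Host Software", from the RFC Editor
curl https://www.rfc-editor.org/rfc/rfc1.txt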

More than anything, this open model of how to work together to create voluntary open standards is perhaps the greatest accomplishment of the RFC process. The Internet model of networking has thrived because it is built upon these open standards.

Standards may come and go over time, but the open way of working persists.

While we may no longer use NCP or some of the other protocols defined in the early RFCs, we are defining new protocols in new RFCs. The next thousands of RFCs will define many aspects of the Internet of tomorrow.[3]

We may not know exactly how that future Internet will work, but it’s a pretty good guess that it will be defined in part through RFCs.



[1] See our History of the Internet page for more background.

[2] For more explanation of the different types of RFCs, see "How to Read an RFC".

[3] As noted in our 2019 Global Internet Report section on “Takeaways and Observations”, we are concerned that an increasing number of new services and applications on the Internet are relying on application programming interfaces (APIs) controlled by the application or platform owner rather than on open standards defined by the larger Internet community.

The post Celebrating 50 Years of the RFCs That Define How the Internet Works appeared first on Internet Society.

Celebrating the 30th Anniversary of the World Wide Web

[Image: http://line-mode.cern.ch/www/hypertext/WWW/TheProject.html]

Back around 1991, I was traveling throughout the eastern USA teaching an "Introduction to the Internet" course I had written. The students were mainly from telecom, financial, and software companies wanting to know what this Internet thing was all about. I taught about IP addresses and DNS, using email, sending files with FTP, using Archie and Veronica to find info, engaging in USENET discussions, and using Gopher to explore "gopherspace".

At the end of the course, one of the final sections was on “emerging technologies”. And there, nestled in with HyTelnet and WAIS, was one single page about this new service called the “World-Wide Web”.

And all the page really said was: telnet to info.cern.ch, login as “www”, and start pressing numbers to follow links on the screen.


That was it! (And you can still experience that site today.)
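
Roughly, the session looked like the transcript below – a loose reconstruction based on the description above, not a verbatim capture:

$ telnet info.cern.ch
login: www
... (the World-Wide Web home page appears as plain text,
     with numbered references marking the links) ...
1
... (typing a number follows that link to the next page) ...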

We had no idea in those very early days that what we were witnessing was the birth of a service that would come to create so much of the communication across the Internet.

In only a few short years, of course, I was teaching new courses on "Weaving the Web: Creating HTML Documents" and "Navigating the World-Wide Web using Netscape Navigator". And all around us there was an explosion of content on the Internet as "everyone" wanted to create their own websites.

The Web enabled anyone to publish and to consume content (assuming they could get access to the Internet). Content finally broke free from the “walled gardens” of the proprietary commercial online services such as CompuServe, AOL, Prodigy, and others. The Web brought an open layer of publishing, communication, and commerce to the gigantic open network of networks that is the Internet. It was a perfect example of the “permissionless innovation” allowed by an open, globally-connected Internet, where no one has to ask permission before creating new services.

Whole new industries were born, while others faded away. New words entered our vocabulary. (For example, before the Web, who used the word "browser"?) New opportunities emerged for so many people around the world. Lives were changed. Education changed. Economies changed. The very fabric of our society changed.

While it is true that the Web could not exist without the Internet, the Internet would not be as amazing as it is without the Web.

And so on this momentous day, we join with the people at CERN, the World Wide Web Consortium (W3C), the World Wide Web Foundation, Tim Berners-Lee, and so many others in celebrating the 30th anniversary of the Web.

The path forward for the next 30 years of the Web, which relies on the Internet to flourish, is not so clear. It is a challenging time for the Internet. And the intensity of the consolidation and centralization within the Internet economy has caused Tim Berners-Lee himself to issue a call to re-decentralize the Web.

But for today, let us focus on all the good the Web has brought to the Internet, all the people it has helped, all the lives it has transformed.

Happy 30th birthday to the Web!


Image credit: CERN’s re-created info.cern.ch.

The post Celebrating the 30th Anniversary of the World Wide Web appeared first on Internet Society.

New Internet Draft: Considerations on Internet Consolidation and the Internet Architecture

[Image: swirling vortex of stars]

Are there assumptions about the Internet architecture that no longer hold in a world where larger, more centralized entities provide big parts of the Internet service? If the world changes, the Internet and its technology/architecture may have to match those changes. It appears that level[ing] the playing field for new entrants or small players brings potential benefits. Are there technical solutions that are missing today?

These questions were among the many asked in a new Internet Draft published yesterday by former IETF Chair Jari Arkko on behalf of several Internet Architecture Board (IAB) members with the title "Considerations on Internet Consolidation and the Internet Architecture":

https://tools.ietf.org/html/draft-arkko-iab-internet-consolidation-00

The draft text is based on the IAB "Consolidation" blog post back in March 2018, as well as a new post Jari and Brian Trammell have written for the APNIC and RIPE sites.

The abstract of the Internet Draft is:


Many of us have held a vision of the Internet as the ultimate distributed platform that allows communication, the provision of services, and competition from any corner of the world. But as the Internet has matured, it seems to also feed the creation of large, centralised entities in many areas. This phenomenon could be looked at from many different angles, but this memo considers the topic from the perspective of how available technology and Internet architecture drives different market directions.


The document discusses different aspects of consolidation, including economic and technical factors. It ends with Section 3, "Actions," which lists these questions and comments for discussion:

  • Are there assumptions about the Internet architecture that no longer hold in a world where larger, more centralised entities provide big parts of the Internet service? If the world changes, the Internet and its technology/architecture may have to match those changes. It appears that level[ing] the playing field for new entrants or small players brings potential benefits. Are there technical solutions that are missing today?
  • Assuming that one does not wish for regulation, technologies that support distributed architectures, open source implementations of currently centralised network functions, or help increase user’s control can be beneficial. Federation, for example, would help enable distributed services in situations where smaller entities would like to collaborate.
  • Similarly, in an asymmetric power balance between users and services, tools that enable the user to control what information is provided to a particular service can be very helpful. Some such tools exist, for instance, in the privacy and tracking-prevention modes of popular browsers but why are these modes not the default, and could we develop them further?
  • It is also surprising that in the age of software-defined everything, we can program almost anything else except the globally provided, packaged services. Opening up interfaces would allow the building of additional, innovative services, and better match with users’ needs.
  • Silver bullets are rare, of course. Internet service markets sometimes fragment rather than cooperate through federation. And the asymmetric power balances are easiest changed with data that is in your control, but it is much harder to change when someone else holds it. Nevertheless, the exploration of solutions to ensure the Internet is kept open for new innovations and in the control of users is very important.
  • What IETF topics should be pursued to address some of the issues around consolidation?
  • What measurements relating to the development of centralization or consolidation should be pursued?
  • What research – such as distributed Internet architectures – should be driven forward?

These are all excellent questions, many of which have no easy answers. The draft encourages people interested in this topic to join the IAB’s “architecture-discuss” mailing list (open to anyone interested to subscribe) as one place to discuss this. This is all part of the ongoing effort by the IAB to encourage a broader discussion on these changes that have taken place to the way in which the Internet operates.

It is great to see this Internet Draft and I do look forward to the future discussions to see what actions or activities may emerge. It’s a challenging issue. As the draft discusses, there are both positive and negative aspects to consolidation of services – and the tradeoffs are not always clear.

This broader issue of consolidation or centralization has been an area of interest for us at the Internet Society for quite some time, dating back to our “future Internet scenarios” in 2008 and even before. More recently, our Global Internet Report 2017 on the “Paths to Our Digital Future” recognized the concerns – so much so that we decided to focus our next version of the GIR on this specific topic. (Read our 2018 GIR concept note).

Beyond the Global Internet Report, we’ve published articles relating to consolidation – and it’s been a theme emerging in several of our “Future Thinking” posts. I know that we will continue to write and speak about this theme because at its core it is about the future of what we want the Internet to be.

Please do join in these conversations. Share this Internet Draft with others. Share our 2017 Global Internet Report. Engage in the discussions. Help identify what the issues may be – and what solutions might be.

The Internet must be for everyone. Together we can #ShapeTomorrow.


Image credit: a cropped section of a photo by Paul Gilmore on Unsplash

The post New Internet Draft: Considerations on Internet Consolidation and the Internet Architecture appeared first on Internet Society.

Rough Guide to IETF 101: DNSSEC, DANE, DNS Security and Privacy

It's going to be a crazy busy week in London next week in the world of DNS security and privacy! As part of our Rough Guide to IETF 101, here's a quick view of what's happening in the world of DNS. (See the full agenda online for everything else.)

IETF 101 Hackathon

As usual, there will be a good-sized “DNS team” at the IETF 101 Hackathon starting tomorrow. The IETF 101 Hackathon wiki outlines the work (scroll down to see it). Major security/privacy projects include:

  • Implementing some of the initial ideas for DNS privacy communication between DNS resolvers and authoritative servers.
  • Implementation and testing of the drafts related to DNS-over-HTTPS (from the new DOH working group).
  • Work on DANE authentication within systems using the DNS Privacy (DPRIVE) mechanisms.

Anyone is welcome to join us for part or all of that event.

Thursday Sponsor Lunch about DNSSEC Root Key Rollover

On Thursday, March 22, at 12:30 UTC, ICANN CTO David Conrad will speak on “Rolling the DNS Root Key Based on Input from Many ICANN Communities“. As the abstract notes, he’ll be talking about how ICANN got to where it is today with the Root KSK Rollover – and about the open comment period on the plan to roll the KSK in October 2018.

David’s session will be streamed live for anyone wishing to view remotely.

DNS Operations (DNSOP)

The DNS sessions at IETF 101 really begin on Tuesday, March 20, with the DNS Operations (DNSOP) Working Group from 15:50 – 18:20 UTC. Several of the drafts under discussion will relate to the Root KSK Rollover and how to better automate and monitor key rollovers. DNSOP also meets on Thursday, March 22, from 18:10-19:10, where one draft of great interest will be draft-huque-dnsop-multi-provider-dnssec. This document explores how to deploy DNSSEC in environments where multiple DNS providers are in use. As per usual, given the critical role DNS plays, the DNSOP agenda has many other drafts up for discussion and action.
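
If you want to poke at DNSSEC from the command line while following these discussions, dig can show whether a zone is serving DNSSEC records at all – a quick sketch, using the signed example.com zone:

# Ask for the zone's DNSKEY records, with DNSSEC records (RRSIGs) included
dig example.com DNSKEY +dnssec +multiline

# A validating resolver sets the "ad" (authenticated data) flag on success
dig example.com A +dnssec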

DNS PRIVate Exchange (DPRIVE)

The DPRIVE working group meets Wednesday afternoon from 13:30-15:00 UTC.  As shown on the agenda, there will be two major blocks of discussion. First, Sara Dickinson will offer recommendations for best current practices for people operating DNS privacy servers. This builds off of the excellent work she and others have been doing within the DNS Privacy Project.

The second major discussion area will involve Stephane Bortzmeyer discussing how to add privacy to the communication between a DNS recursive resolver and the authoritative DNS server for a given domain. When the DPRIVE working group was first chartered, the discussion was whether to focus on the privacy/confidentiality between a stub resolver and the local recursive resolver; or between the recursive resolver and authoritative server; or both. The decision was to focus on the stub-to-recursive-resolver connection – and that is now basically done from a standards perspective. So Stephane is looking to move the group on into the next phase of privacy. As a result, the session will also include a discussion around re-chartering the DPRIVE Working Group to work on this next stage of work.
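
That completed stub-to-recursive work is what became DNS-over-TLS (RFC 7858). As one illustration of it in practice – assuming a client with DNS-over-TLS support, such as kdig from knot-dnsutils, and a public resolver offering TLS on port 853 (Cloudflare's 1.1.1.1, shown here, launched shortly after this post was written):

# Send the query over TLS on port 853 instead of plain UDP port 53
kdig @1.1.1.1 +tls example.com A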

Extensions for Scalable DNS Service Discovery (DNSSD)

On a similar privacy theme, the DNSSD Working Group will meet Thursday morning from 9:30-12:00 UTC and include a significant block of time discussing privacy and confidentiality. DNSSD focuses on how to make device discovery easier across multiple networks. For instance, helping you find available printers on not just your own network, but also on other networks to which your network is connected. However, in doing so, the current mechanisms expose a great deal of information. draft-ietf-dnssd-privacy-03 and several related drafts explore how to add privacy protection to this mechanism. The DNSSD agenda shows more information.

DNS-Over-HTTPS (DOH)

IETF 101 will also feature the second meeting of one of the working groups with the most fun names – DNS Over HTTPS or… “DOH!” This group is working on standardizing how to use DNS within the context of HTTPS. It meets on Thursday from 13:30-15:30. As the agenda indicates, the focus is on some of the practical implementation experience and the work on the group’s single Internet-draft: draft-ietf-doh-dns-over-https.

DOH is an interesting working group in that it was formed for the express purpose of creating a single RFC. With that draft moving to completion, this might be the final meeting of DOH – unless it is rechartered to do some additional work.
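
For a sense of what that looks like on the wire, here is a sketch of a DoH-style query using a public resolver's JSON interface – the endpoint shown (Cloudflare's, which launched shortly after this post was written) is just one example, and the JSON format is a convenience interface rather than the draft's binary wire format:

# Ask a DoH resolver for the A records of example.com, JSON-formatted
curl -s -H 'accept: application/dns-json' \
  'https://cloudflare-dns.com/dns-query?name=example.com&type=A'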

DNSSEC Coordination informal breakfast meeting

Finally, on Friday morning before the sessions start we are planning an informal gathering of people involved with DNSSEC. We’ve done this at many of the IETF meetings over the past few years and it’s been a good way to connect and talk about various projects. True to the “informal” nature, we’re not sure of the location and time yet (and we are not sure if it will involve food or just be a meeting). If you would like to join us, please drop me an email or join the dnssec-coord mailing list.

Other Working Groups

DANE and DNSSEC will also appear in the TLS Working Group’s Wednesday meeting. The draft-ietf-tls-dnssec-chain-extension will be presented as a potential way to make DANE work faster by allowing both DANE and DNSSEC records to be transmitted in a single exchange, thus reducing the time involved with DANE transactions. Given the key role DNS plays in the Internet in general, you can also expect DNS to appear in other groups throughout the week.
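
As a quick reminder of how DANE works at the DNS level: a TLSA record is published at a name derived from the port and protocol of the TLS service it covers. A sketch – www.example.org here is a placeholder, not a domain known to publish TLSA records:

# Look up the DANE TLSA record for HTTPS (TCP port 443) on a host
dig _443._tcp.www.example.org TLSA +dnssec +short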

P.S. For more information about DNSSEC and DANE and how you can get them deployed for your networks and domains, please see our Deploy360 site:

Relevant Working Groups at IETF 101:

DNSOP (DNS Operations) WG
Tuesday, 20 March 2018, 15:50-18:30 UTC, Sandringham
Thursday, 22 March 2018, 18:10-19:10 UTC, Sandringham

Agenda: https://datatracker.ietf.org/meeting/101/agenda/dnsop/
Documents: https://datatracker.ietf.org/wg/dnsop/
Charter: http://tools.ietf.org/wg/dnsop/charters/

DPRIVE (DNS PRIVate Exchange) WG
Wednesday, 21 March 2018, 13:30-15:00 UTC, Balmoral
Agenda: https://datatracker.ietf.org/meeting/101/agenda/dprive/
Documents: https://datatracker.ietf.org/wg/dprive/
Charter: http://tools.ietf.org/wg/dprive/charters/

DNSSD (Extensions for Scalable DNS Service Discovery) WG
Thursday, 22 March 2018, 9:30-12:00 UTC, Buckingham
Agenda: https://datatracker.ietf.org/meeting/101/agenda/dnssd/
Documents: https://datatracker.ietf.org/wg/dnssd/
Charter: http://tools.ietf.org/wg/dnssd/charters/

DOH (DNS over HTTPS) WG
Thursday, 22 March 2018, 13:30-15:30 UTC, Blenheim
Agenda: https://datatracker.ietf.org/meeting/101/agenda/doh/
Documents: https://datatracker.ietf.org/wg/doh/
Charter: http://tools.ietf.org/wg/doh/charters/

Follow Us

It will be a busy week in London, and whether you plan to be there or join remotely, there's much to monitor. Read the full series of Rough Guide to IETF 101 posts, and follow us on the Internet Society blog, Twitter, or Facebook using #IETF101 to keep up with the latest news.

The post Rough Guide to IETF 101: DNSSEC, DANE, DNS Security and Privacy appeared first on Internet Society.

Watch Live – IETF 100 Plenary Panel on the Future of the Internet

What is the future of the Internet? What will the Internet look like in 30 years? On Wednesday, November 15, three prominent strategists will gaze into the future and share their unique perspectives. This panel on "The Internet, a look forward: Social, political, and technical perspectives" is part of the IETF 100 plenary session streaming live out of Singapore. The plenary session will also include the presentation of the Jonathan B. Postel Service Award.

You can watch live at:    https://www.ietf.org/live

The entire IETF 100 plenary session is from 17:10 – 19:40 Singapore time. This is UTC+8, which translates into:

  • 10:10 – 12:40 Central European Time
  • 9:10 – 11:40 UTC
  • 4:10 – 6:40 US Eastern time

IMPORTANT NOTE – The panel and the Postel Award presentation are just two sections of the IETF 100 plenary session – and happen somewhere in the middle of the session. The full agenda can be found at:  https://datatracker.ietf.org/meeting/100/materials/agenda-100-ietf-sessa/

The live video stream will be recorded if you want to watch later.

Moderated by Brian Trammell, member of the Internet Architecture Board, panelists include:

  • Monique Morrow, President and Co-Founder of the Humanized Internet, a non-profit organization focused on providing digital identity for those individuals most under-served
  • Jun Murai, Founder of WIDE Project and Professor at  Keio University with a research focus in global computer networking and communication, and known as the “Father of Japan’s Internet” or “Internet Samurai”
  • Henning Schulzrinne, Professor in the Department of Electrical Engineering and chair of the Department of Computer Science at Columbia University, New York

Join in to hear the panel’s perspectives and the discussion.

When you are done, you may wish to explore our Internet Society 2017 Global Internet Report: Paths to our Digital Future, where we provide an analysis and perspective on different paths we see for the future of the Internet.

This discussion about the future of the Internet – happening at IETF 100, happening online, and happening in many other venues – is critical. There are many paths the Internet could take – but only some of them will benefit all of humanity.

It is up to each one of us to help shape the Internet of tomorrow.


Image credit: Michal Lomza on Unsplash

The post Watch Live – IETF 100 Plenary Panel on the Future of the Internet appeared first on Internet Society.

How To Survive A DNS DDoS Attack – Consider using multiple DNS providers

How can your company continue to make its website and Internet services available during a massive distributed denial-of-service (DDoS) attack against a DNS hosting provider? In light of last Friday’s attack on Dyn’s DNS infrastructure, many people are asking this question.

One potential solution is to look at using multiple DNS providers for hosting your DNS records. The challenge with Friday's attack was that so many of the affected companies – Twitter, Github, Spotify, Etsy, SoundCloud and many more – were using ONLY one provider for DNS services. When that DNS provider, Dyn, then came under attack, people couldn't get to the servers running those services. It was a single point of failure.

You can see this yourself right now. If you go to a command line on a Mac or Linux system and type "dig ns twitter.com",[1] the answer you will see is something like:

twitter.com.	10345  IN  NS   ns4.p34.dynect.net.
twitter.com.	10345  IN  NS   ns3.p34.dynect.net.
twitter.com.	10345  IN  NS   ns1.p34.dynect.net.
twitter.com.	10345  IN  NS   ns2.p34.dynect.net.

What this says is that Twitter is using only Dyn. (“dynect.net” is the domain name of Dyn’s “DynECT” managed DNS service.)

Companies using Dyn who also used another DNS provider, though, had less of an issue. Users may have experienced delays in initially connecting to the services, but they were still able to eventually connect. Here is what Etsy's DNS looks like after Friday (via "dig ns etsy.com"):

etsy.com.	9371  IN  NS   ns1.p28.dynect.net.
etsy.com.	9371  IN  NS   ns-870.awsdns-44.net.
etsy.com.	9371  IN  NS   ns-1709.awsdns-21.co.uk.
etsy.com.	9371  IN  NS   ns3.p28.dynect.net.
etsy.com.	9371  IN  NS   ns-1264.awsdns-30.org.
etsy.com.	9371  IN  NS   ns-162.awsdns-20.com.
etsy.com.	9371  IN  NS   ns4.p28.dynect.net.
etsy.com.	9371  IN  NS   ns2.p28.dynect.net.

Etsy is now using a combination of Dyn’s DynECT DNS services and Amazon’s Route 53 DNS services.

But wait, you say… shouldn't this be "DNS 101"?

  • Aren't you always supposed to have DNS servers spread out across the world?
  • Why don't they have "secondary DNS servers"?
  • Isn't that a common best practice?

Well, all of these companies did have secondary servers, and their DNS servers were spread out all around the world. This is why users in Asia, for instance, were able to get to Twitter and other sites while users in the USA and Europe were not able to do so.

So what happened?

It gets a bit complicated.

20 Years Ago…

Jumping back, say, 20 years or so, it was common for everyone to operate their own "authoritative servers" in DNS that would serve out their DNS records. A huge strength of DNS is that it is "distributed and de-centralized" – anyone registering a domain name is able to operate their own "authoritative servers" and publish all of their own DNS records.

To make this work, you publish "name server" ("NS") records for each of your domain names that list which DNS servers are "authoritative" for your domain. These are the servers that can answer back with the DNS records that people need to reach your servers and services.

You need to have at least one authoritative server that would give out your DNS records. Of course, in those early days, if there was a problem with that server and it went offline, people would not be able to get the DNS records that would get them to your other computers and services. Similarly, you could have a problem with your connection to the Internet and people could not get to your authoritative server.

For that reason the best practice emerged of having a “secondary” authoritative DNS server that contained a copy of all of the DNS records for your domain. The idea was to have this in a different geographic location and on a different network.

On the user end, we use what is called a "recursive DNS resolver" to send out DNS queries and get back the IP addresses that our computers need to connect. Our DNS resolvers will get the list of name servers ("NS records") and choose one to connect to. If an answer doesn't come back after some short period of time, the resolver will try the next NS record, and the next… until it runs out of NS records to try.
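
You can mimic that fallback behavior from a shell. A rough sketch (example.com is a placeholder domain), trying each advertised name server in turn with a short timeout and stopping at the first one that answers:

# Try each authoritative server for the zone until one responds
for ns in $(dig +short NS example.com); do
  dig @"$ns" example.com A +time=2 +tries=1 +short && break
done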

Back in July 1997, the IETF published RFC 2182, dedicated to this topic: Selection and Operation of Secondary DNS Servers. It's fun to go back and read through that document almost 20 years later as a great deal has changed. But back in the day, this was a common practice:

The best approach is usually to find an organisation of similar size, and agree to swap secondary zones – each organization agrees to provide a server to act as a secondary server for the other organisation's zones.

As noted in RFC 2182, it was common for people to have 2, 3, 4 or even more authoritative servers. One would be the "primary" or master server where changes were made – the others would all be "secondary" servers grabbing copies of the DNS records from the primary server.

Over the years, companies and organizations would spend a great amount of time, energy and money building out their own DNS server infrastructure. Having this kind of geographic and network resilience was critical to ensure that users and customers could get the DNS records that would get them to the organization's servers and services.

The Emergence of DNS Hosting Providers

But most people really didn't want to run their own global infrastructure of DNS servers. They didn't want to deal with all the headaches of establishing secondary DNS servers and all of that. It was costly and complicated – and just more than most companies wanted to deal with.

Over time companies emerged that were called "DNS hosting providers" or "DNS providers" who would take care of all of that for you. You simply signed up and delegated operation of your domain name to them – and they did everything else.

The advantages were – and are today – enormous. Instead of only a couple of secondary DNS servers, you could have tens or even hundreds. Technologies such as anycast made this possible. The DNS hosting provider would take care of all the data center operation, the geographic diversity, the network diversity… everything. And they provided you with all this capability on a global and network scale that very few companies could provide all by themselves.

The DNS hosting providers gave you everything in the RFC 2182 best practices – and so much more!

And so over the past 10 years most companies and people moved to using DNS hosting providers of some form. Often individuals simply use the DNS hosting provided by whatever domain name registrar they use to register their domain name. Companies have outsourced their DNS hosting to companies such as Dyn, Amazon's Route 53, CloudFlare, Google's Cloud DNS, UltraDNS, Verisign and so many more.

It’s simple and easy … and probably 99.99% of the time it has “just worked”.

And you only needed one DNS provider because they were giving you all the necessary secondary DNS services and diversity protection.

Friday’s Attack

Until Friday – when, for some parts of the Internet, the DNS hosting services of Dyn didn't work.

It’s important to note that Dyn’s overall DNS network still worked. They never lost all their data centers to the attack. People in some parts of the world, such as Asia, continued to be able to get DNS records and connect to all the affected services without any issues.

But on Friday, all the many companies and services that were using Dyn as their only DNS provider suddenly found that a substantial part of the Internet’s user community couldn’t get to their sites. They found that they were sharing the same fate as their DNS provider in a way that would not have been true before the large degree of centralization with DNS hosting providers.

Some companies, like Twitter, stayed with Dyn through the entire process and weathered the storm. Others, like Github, chose to migrate their DNS hosting to another provider. Still others chose to start using multiple DNS providers.

Why Doesn't Everyone Just Use Multiple DNS Providers?

This would seem the logical question. But think about that for a second – each of these major DNS providers already has a global, distributed DNS architecture that goes far beyond what companies could provide in the past.

Now we want to ask companies to use multiple of these large-scale DNS providers?

I put this question out in a number of social networks and a friend of mine whose company was affected nailed the issue with this comment:

Because one DNS provider, with over a dozen points-of-presence (POPs) all over the world and anycast, had been sufficient, up until this unprecedented DDoS. We had eight years of 100% availability from Dyn until Friday. Dealing with multiple vendors (and paying for it) didn't have very good ROI (and I'm still not sure it does, but we'll do it anyway).

Others chimed in and I can summarize the answers as:

  • CDNs and GLBs – Most websites no longer sit on a single web server publishing a simple set of HTML files. They are large complex beasts pulling in data from many different servers and sites. And they very often sit behind content delivery networks (CDNs) that cache website content and make it available through "local" servers or global load balancers (GLBs) that redirect visitors to different servers. Most of these CDNs and GLBs work by using DNS to redirect people to the "closest" server (chosen by some algorithm). When using a CDN or GLB, you typically wind up having to use only that service for your DNS hosting. I've found myself in this situation with a few of my own sites where I use a CDN.
  • Features – Many companies use more sophisticated features of DNS hosting providers such as geographic redirection or other mechanisms to manage traffic. Getting multiple providers to modify DNS responses in exactly the same way can be difficult or impossible.
  • Complexity – Beyond CDNs and features, multiple DNS providers simply add complexity to IT infrastructure. You need to ensure both providers are publishing the same information, and getting that information out to providers can be tricky in some complex networks. (See the sketch after this list.)
  • Cost – The convenience of using a DNS hosting provider comes at a substantial financial cost. For the scale needed by major Internet services, the DNS providers aren't cheap.
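
On the "publishing the same information" point, even a basic spot-check across providers means comparing answers server by server. A minimal sketch – the provider name server names here are hypothetical:

# Compare the answers two providers give for the same record
diff <(dig @ns1.provider-a.example www.example.com +short | sort) \
     <(dig @ns1.provider-b.example www.example.com +short | sort)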

For all of these reasons and more, it’s not an easy decision for many sites to move to using multiple DNS providers.

It’s complicated.

And yet…

And yet the type of massive DDoS attack we saw on Friday may require companies and organizations to rethink their "DNS strategy". With the continued deployment of the Internet of Insecure Things, in particular, these types of DDoS attacks may become worse before the situation can improve. (Please read Olaf Kolkman's post for ideas about how we move forward.) There will be more of these attacks.

As my friend wrote in further discussion:

These days you outsource DNS to a company that provides way more diversity than anyone could in the days before anycast, but the capacity of botnets is still greater than one of the biggest providers, and probably bigger than the top several providers combined.

And even more to the point:

The advantage of multiple providers on Friday wasn't network diversity, it was target diversity.

The attackers targeted Dyn this time, so companies who use DNS services from Amazon, Google, Verisign or others were okay. Next time the target might be one of the others. Or perhaps attackers may target several.

The longer-term solutions, as Olaf writes about, involve better securing all the devices connected to the Internet to reduce the potential of IoT botnets. They involve continued collaborative work to reduce the effects of malware and bad routing info (e.g., MANRS). They involve the continued and improved communication and coordination between network operators and so many others.

But in the meantime, I suspect many companies and organizations will be considering whether it makes sense to engage with multiple DNS providers. For many, they may be able to do so. Others may need the specialized capabilities of specific providers and find themselves unable to use multiple providers. Some may not find the return on investment warrants it. While others may accept that they must do this to ensure that their services are always available.

Sadly, taking DNS resilience to an even higher level may be what is required for today.

What do you think? Do you use multiple DNS providers? If so, what worked for you? If not, why not? I would be curious to hear from readers, either as comments here or out on social networks.



[1] Windows users do not have the 'dig' command by default. Instead you can type "nslookup -type=NS <domainname>". The results may look different from what is shown here, but will have similar information.

NOTE: I want to thank the people who replied to threads on this topic on Hacker News, in the /r/DNS subreddit and on social media. The comments definitely helped in expanding my own understanding of the complexities of the way DNS providers operate today.

Image credit: a photo I took of a friend’s T-shirt at a conference.

The post How To Survive A DNS DDoS Attack – Consider using multiple DNS providers appeared first on Internet Society.

ISOC@OECD, Day 3: Walid Al-Saqaf on Blockchain; IETF Chair Jari Arkko on Network Convergence

It's the final day of the OECD Ministerial Meeting on the Digital Economy here in Cancun, Mexico, and there are just two more session blocks followed by the Closing Ceremony. Below is where our attention will be focused today – and to understand the broader questions around why we are here, please read our OECD Ministerial Background Paper. (All times are local to Cancun – UTC-5.)

You can also view the OECD Ministerial Agenda for a full list of sessions and participants.

9:00-10:45 – Improving Networks and Services through Convergence

In the first session on "Improving Networks and Services through Convergence", Internet Engineering Task Force (IETF) Chair Jari Arkko is one of the speakers in a session about the convergence of telecommunications and Internet services. The panel is moderated by U.S. Ambassador Daniel Sepulveda and includes communications ministers, regulators, the CEO of AT&T Mexico and a VP from Facebook. It should be an interesting session given the tension between the older world of telecom and the newer world of the Internet.

Simultaneously, the other active session will be “New Markets and New Jobs in the Digital Economy” and it includes another ITAC organization, the IEEE, represented by their Managing Director, Konstantinos Karachalios.

11:15-13:00 – Skills for a Digital World

In the final session block, Internet Society Board of Trustees member Walid Al-Saqaf will be a "key intervener" in the panel "Skills for a Digital World". As Walid notes in a blog post published today, he intends to ask the panel about what policy makers are doing to stay up-to-date on blockchain technology. (Process note: a "key intervener" is a participant who is designated before the event to ask a question of the panel.)

At the same time, the session in the room next door will be on “Tomorrow’s Internet of Things” and includes a wide range of ministers, executives and others. (We would naturally hope that people there will have read our Internet of Things Overview document that outlines some of the key challenges and opportunities we see with the IoT.)

After that, there will be lunch, the Closing Ceremony and the final press conference… and we’re done!

For more information about what we have been doing here at the OECD Ministerial on the Digital Economy, please visit our event page. We will be adding links there to our articles, videos and more.

Throughout the day you can follow our @InternetSociety Twitter account where we will be providing updates using the #OECDdigitalMX hashtag.

Watch this blog, too, for a wrap-up post coming from Constance Bommelaer tomorrow.

Image credit: a photo I took of the “Official Photo of Ministers and Heads of Delegations”. Our Constance Bommelaer is standing at the front left edge. 

The post ISOC@OECD, Day 3: Walid Al-Saqaf on Blockchain; IETF Chair Jari Arkko on Network Convergence appeared first on Internet Society.