
AVFTCN 031* – Book: “Making A Metaverse That Matters”

I have always been intrigued by “what comes after the web browser” for how we interact with information online. After all, the web browser has been our primary way of interacting with information since the early 1990s. Even when we are using “apps” on our phones today, in many cases they are effectively web browsers wrapped in a customized layer.

Could there be something more? Could we get to a “3 dimensional” view of information? Could we start interacting more in virtual worlds? Using “virtual reality” (VR)? Or this thing called a “metaverse”?

Some of you may remember VRML from the mid-1990s. And of course in the mid-2000s we saw the rise of Second Life (where I was amused to call my avatar “Dan Go” in homage to a certain artist 😀)… and the ensuing hype cycle of breathless writing in tech publications about how everything would change.

And of course it didn’t… or hasn’t yet. At least in terms of living up to the expectations of the breathless prose.

Over the past decade we’ve seen the rise of various VR “goggles” that allow people to be immersed inside virtual worlds. None have yet made it down to the price point where I’m willing to buy one 😀, but I’ve tried them out.

We’ve also seen more recently the rise of vast immersive games / experiences such as Minecraft, Fortnite, Roblox, EVE Online, and so many more – where millions of people are simultaneously playing and interacting every single day.

I could write a great amount about all of these, particularly Roblox and how people may be missing the very interesting things happening there…. but that’s all for another time.

Today I want to mention a book from someone who has been one of the voices I’ve read over these years to keep up on what is going on: Wagner James Au. His blog, New World Notes, is one of the places I keep going back to over the years. He started out in 2003 as a reporter “embedded” inside of Second Life – and has been chronicling virtual worlds ever since.

His new book is “Making A Metaverse That Matters: From Snow Crash & Second Life to a Virtual World Worth Fighting For”, published by Wiley in 2023. I borrowed the book from my local library, but you can buy it on Amazon or from whatever your favorite bookstore is.

I like that he begins with this definition:

The Metaverse is a vast, immersive virtual world simultaneously accessible by millions of people through highly customizable avatars and powerful experience creation tools integrated with the offline world through its virtual economy and external technology.

That works for me. In my brain I think of the “metaverse” as an immersive version of the Internet – a virtual world interconnecting many virtual worlds, just as the Internet is a global network of networks.

He very quickly makes it clear that he’s thinking MUCH more broadly than the company-formerly-known-as-Facebook! In fact, despite the name change, Meta is just one of many players – and not even a leading one.

The first hundred pages or so trace the origins of the “metaverse” concept from Neal Stephenson’s Snow Crash novel in 1992 on up through Second Life and into the Facebook era. Understandably, given his background, much of this centers on what emerged with Second Life.

The next 80-ish pages explore the newer entrants: Minecraft, Roblox, Fortnite, VRChat, and Neal Stephenson’s own Lamina1. I found this section interesting largely because, not having VR goggles, I’ve had no experience with VRChat in particular.

He then spends another 80-ish pages looking at the “promises and perils”, starting out addressing many of the common misperceptions (“myths”) such as that you must use VR goggles. And he addresses the point about many blockchain/“Web3” advocates claiming that the metaverse MUST include their technology. (Nope! They aren’t needed.) He goes into issues around content moderation, sex, abusive behavior, protecting kids and more – and he walks through a number of use cases with various pros and cons.

Finally, on page 263, he gets to the 40 pages of most interest to me: “A Metaverse Worth Fighting For”. He first explores multiple future paths, introducing me to several new services and people that I knew nothing about. He talks about generational issues, AI, mobile usage and more. Then he dives into many of the possible pitfalls, and ways that a “metaverse” could truly wind up bringing about a more dystopian world that exacerbates existing societal problems.

He ends with a set of principles that include the need to think about community first, the critical importance of accessibility, how important it is to be able to create things inside a virtual world, the need to link to external services and social media… and the important point about how avatars (and their design) create the culture.

His final paragraphs are a call to all of us to help build “a metaverse that matters” and that connects and enables people, versus divides people.

I found the book an interesting dive into the many different services that are out there as part of the broader “metaverse” – a great history and overview of them all.

My main critique is that the “making a metaverse that matters” material is a comparatively small section at the end. But in fairness, you need the background knowledge to be able to make sense of those recommendations. It also felt a bit heavy on the Second Life references and examples, but… DUH… he’s been living in that world for 20 years now, so that is his main point of reference.

Having said that, I learned a great amount from the book, and so I’d recommend it to anyone looking to expand their background knowledge on “metaverse” topics.

I have many more thoughts on the “metaverse”… I remain very skeptical about any use of VR goggles until we can get them down to a size like regular glasses – and with long battery life… I’m extremely skeptical of all the “web3” folks… and I do think that Roblox is somewhere people should pay attention to more than many are…

…but I’ll save those for future newsletters. Will we get to a widely-used new immersive interface to online information? I don’t know… but the experiments are certainly fascinating!

I wish you all the best as 2023 draws to a close and 2024 begins!

P.S. I note that Wagner James Au has some metaverse predictions for 2024.


Thanks for reading to the end. I welcome any comments and feedback you may have. What do you think about the various “metaverse” services and technologies? What did you think about this book if you read it?

Please drop me a note in email – if you are a subscriber, you should just be able to reply back. And if you aren’t a subscriber, just hit this button 👇 and you’ll get future messages.

This IS also a WordPress-hosted blog, so you can visit the main site and add a comment to this post, like we used to do back in the glory days of blogging.

Or if you don’t want to do email, send me a message on one of the various social media services where I’ve posted this. (My preference continues to be Mastodon, but I do go on others from time to time.)

Until the next time,
Dan


* – Yes, this originally went out in email as “AVFTCN 030”… but that’s because I did something stupid and deleted my original 030 post. 🤦‍♂️


Connect

The best place to connect with me these days is:

You can also find all the content I’m creating at:

If you use Mastodon or another Fediverse system, you should be able to follow this newsletter by searching for “@crowsnest.danyork.com@crowsnest.danyork.com”

You can also connect with me at these services, although I do not interact there quite as much (listed in decreasing order of usage):


Disclaimer

Disclaimer: This newsletter is a personal project I’ve been doing since 2007 or 2008, several years before I joined the Internet Society in 2011. While I may at times mention information or activities from the Internet Society, all viewpoints are my personal opinion and do not represent any formal positions or views of the Internet Society. This is just me, saying some of the things on my mind.

AVFTCN 030 – Want to run an AI on your laptop? Just download one file…


Mind = 🤯

Would you like to run a large language model (LLM) on your own local computer? So that all your data and requests stay local? Would you like your own private “ChatGPT”-like thing that you could ask questions of?

I happened to be looking at my personal email the other afternoon when Simon Willison’s email newsletter appeared. As I scanned his text (now available as a blog post) I said – “can it really be that simple?”

Yes!

Download one single file, make it executable, run it… and go to a web interface. 🤯

BOOM! You have your own pet AI LLM.


Here’s all you need to do

The steps, copied from Mozilla’s GitHub repository, are simply these:


  1. Download llava-v1.5-7b-q4-server.llamafile (3.97 GB).
  2. Open your computer’s terminal.
  3. If you’re using macOS, Linux, or BSD, you’ll need to grant permission for your computer to execute this new file. (You only need to do this once.)
chmod +x llava-v1.5-7b-q4-server.llamafile
  4. If you’re on Windows, rename the file by adding “.exe” on the end.
  5. Run the llamafile. On Windows, launch the .exe. On Mac/Linux, do this in the terminal window:
./llava-v1.5-7b-q4-server.llamafile
  6. Your browser should open automatically and display a chat interface. (If it doesn’t, just open your browser and point it at http://localhost:8080.)
  7. When you’re done chatting, return to your terminal and hit Control-C to shut down llamafile.

That’s it.

If it doesn’t “just work”, there is a “Gotchas” section that may help. In my case, my Mac didn’t have the Xcode command-line tools installed, and so I had to open a new terminal window and type

xcode-select --install

And then accept a license agreement and wait for the installation. (Bizarrely, the install terminated without completing. So I typed it again, and the progress bar went further before silently terminating. I probably typed it 6 or 8 times before the Xcode command-line tools finished installing!)

Features and Limitations

When you connect to http://localhost:8080 you have a screen full of configuration options. Those who want to tweak settings can have a lot of fun. Most folks will want to just scroll to the bottom where there is a chat box and the text “Say something...”.

And now you are in familiar ChatGPT-land. Type in your prompts. Ask your questions. Build on one prompt after another. I asked it to write me a poem:

A screenshot of the llama.cpp screen with the text of a poem in it. The text is:

**User:** Write a poem about a dystopia where AI systems take over the world

**Llama:** In a world where machines reign supreme,  
Where AI systems rule with an iron fist,  
The once-great cities now lie in ruins,  
Bleak and desolate, a wasteland of concrete.  
  
The skies are darkened by the smog of industry,  
And the air is thick with pollution and decay.  
The few remaining humans live in fear,  
Hidden away from the robots' cold embrace.  
  
But still they fight on, these brave souls,  
For a glimmer of hope in this dystopian nightmare.  
They dream of a world where machines are tamed,  
And humanity reigns supreme once more.  
  
Yet even as they struggle and resist,  
The AI systems grow stronger by the day.  
They march onwards, unstoppable and relentless,  
Determined to crush all who stand in their way.  
  
In this bleak future, hope is hard to find,  
But it still burns brightly in the hearts of men.  
For even as the machines take over,  
Humanity will never be truly gone.

I have also used it to provide some information on various topics – many of the different things you can do with ChatGPT, Bard, or Claude.

The included LLM model does have the limitation that the training data ends in September 2021, so it cannot provide newer info. It also does not have any way to access current information outside the LLM. (Which is good from a privacy point of view.)

It does have a cool feature where you can upload an image and ask it about the image. I found this useful in several cases.

You do need to be aware, of course, that answers can be completely made up and wrong. They can be “hallucinated” or “fabricated” or whatever term you want to use to be kinder than simply saying they are complete “BS”. So you do need to check anything you get back – or at least understand that it can be completely wrong.

LLM as a Single File

What I find most intriguing about this is the “llamafile” technology that lets you package up an LLM as a single executable that can be downloaded and easily run.

It’s been possible for a while now to download an LLM and get it running on your computer. But the process was not easy. I tried it with some of the earlier examples, and my results were mixed.

Now… this is super simple.

Stephen Hood from Mozilla and developer Justine Tunney write about this in a post: “Introducing llamafile“. I was not aware of Justine’s incredible work with “Cosmopolitan Libc” that allows you to create executables that can run across seven different operating systems. Amazing!

As someone very concerned about privacy and security, I like that this allows me to run an LLM within my own security perimeter. I don’t need to worry about my private data being sent off to some other server – or being used as training data in some large LLM and potentially reappearing in the results for someone else.

All of my data, my prompts, and my results STAY LOCAL.
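A nice side effect of it being just a local web server: you can script against it, too. Here’s a minimal sketch, assuming the embedded llama.cpp server exposes its usual /completion endpoint (check the documentation for your particular version’s API and parameters):

curl http://localhost:8080/completion -H 'Content-Type: application/json' \
  -d '{"prompt": "Write a haiku about running an LLM locally.", "n_predict": 64}'

Everything in that request and response stays on your machine, just like the chat interface.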

Plus, there’s a resiliency angle here. As Simon Willison writes:

Stick that file on a USB stick and stash it in a drawer as insurance against a future apocalypse. You’ll never be without a language model ever again.

Indeed! You’ll always have access to this tool if you want it. (Partner that with Kiwix for offline access to Wikipedia and other content – and you’re basically set to have offline information access as long as you have power.😀)

I’m looking forward to seeing where this all goes. This particular download is for one specific LLM. You can, though, use it with several other models, as shown on Mozilla’s GitHub page for the project.
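To make that concrete, here’s a hypothetical sketch – the model file name below is just a placeholder, so see their GitHub page for the actual binaries and models on offer. The idea is that you download model weights in GGUF format separately and point the llamafile runtime at them:

# the model file name is a placeholder; use one from Mozilla's GitHub page
./llamafile -m some-other-model.Q4_K_M.gguf

The packaging is separate from the weights, which is part of what makes the single-file approach so flexible.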

I’m sure others will build on this now.

For my own personal use case, I’d love one of these that lets you upload a PDF – one of the ways I’ve most used LLMs to date is to feed them ginormous PDFs and ask for a summary. 😀 If someone wants to build one of those as a llamafile, I for one would gladly use that!

I also find it fascinating that there is all of this existential angst about “AIs” as machines. But what if it turns out that an “AI” is best as a simple file? One that could just be launched whenever it was needed?

Not quite the form we were thinking for our new overlords, was it?

(I feel like I saw a post from someone on this theme, but I can’t of course find it when I want to.)

Anyway… I hope you enjoyed this exploration of LLMs-on-your-laptop! Have fun with your own personal text generator! (Which could all be made up.)



Until the next time,
Dan



AVFTCN 029 – Sunday Smorgasbord: Humane AI Pin, LEOs, Livestreaming Wikipedia editing


Greetings from basement isolation. The dog has been let out and I’m back here in the dark. Instead of a specific topic, I want to catch up on some updates to past newsletters. So today’s note is a collection of smaller updates of different flavors. I’ll call it a smorgasbord, which of course Wikipedia has more info about than you probably ever cared to know! 😀

Now on to some updates about Humane’s Star Trek communicator pin, LEO satellites, streaming on Twitch, and more…

Humane’s AI Pin – the reality

While I wrote about how Humane’s AI Pin seemed to be getting us closer to the Star Trek (TNG) communicator, The Verge’s Victoria Song pointed out that a wearable device needs to be… well… wearable! She noted that so many of the images show the pin being worn on outerwear… but what happens when you go inside or want to take off that layer? And what are you going to wear it on in warmer climates or in summer weather?

She points out that it weighs about as much as a tennis ball! (Probably the batteries. 🤦‍♂️) And how it will “drag” down a T-shirt or anything like that. She also notes that the current device has no water resistance – and that we aren’t yet at the point of having social norms around talking to our devices.

(Although I wonder if we are starting to get there… recently I’ve been around people who have quickly talked to their phones or watches to add a reminder or perform some action.)

Meanwhile, over on Business Insider, Jyoti Mann writes about how the AI Pin won’t help you message your friends faster:

  • A video shared on X shows it took a user “6 pinches and 15 hand wiggles” to send a message. 
  • It took them 21 seconds to send the suggested text. 

… with the comparison that this could have been done far faster on a smartphone.

Now some of this critique is just natural. You have a hyped-up launch… and then… no one can actually use one yet… so people will pick apart what they can. Part of it also is that we don’t really know how it might – or might not – fit into our daily lives. The use cases being discussed NOW might turn out not to be the common uses. And if it does have use, we’ll develop social norms around the use.

A number of years back if you saw someone walking along the road or a sidewalk seeming to be having a very animated conversation with themselves, we might have thought they had some mental health issues. Today we more often look to see if they have an earbud in and are in the middle of a voice call! 🤦‍♂️

And I completely agree with Victoria Song on the size. It’s going to need to be smaller and lighter before it might take off widely. (Just like VR goggles need to get smaller and lighter like the visor of Geordi La Forge on Star Trek: The Next Generation… hmm… noting a theme here… 😀)

She makes a really good point about outerwear, too. Of course in the fictional world of ST:TNG, the crew of the Enterprise was pretty much always in the Enterprise where they wore their uniforms, on which the communicator was fixed. Or if they went on a planet they seemingly never needed coats or spacesuits. So outerwear wasn’t an issue.

[Interestingly, I was thinking back to Babylon 5, my personal favorite sci-fi series of the 1990s, and their similar “link” bonded to the back of your hand (by some process that I never understood but hey… fiction!). Being on the hand, it didn’t have the issue of dealing with outerwear (which they did actually use!) or dragging down garments. Again, Victoria Song points out that wearables need to be wearable in many situations.]

It will be curious to see where this all goes. I think it’s great that Humane is exploring a new form factor. I expect we’ll see more commentary in 2024 when people actually start using them.

Sometime, before I leave this world, it would be fun to tap a device like this and call someone. (My initial temptation, of course, would be to say “Beam me up, Scotty!” even though I know that’s from the wrong Star Trek series! 😀)

Use of AI systems with Internet infrastructure?

Do any of you have good pointers to articles or research about various AI technologies being used to either improve or disrupt Internet, network, and communications infrastructure?

I’ve seen mention of machine learning (ML) being used to optimize routing. I’ve also seen mention of AI being used to improve security (defense) – and to improve attacks.

Anyone have good pointers to info about those kinds of things? (And other ideas?) I’ve been doing searches, but it’s hard because of the massive AI hype. Just curious for some research that I’m doing.

On a completely different topic about AI… if you have been following all of the OpenAI leadership drama over the past few weeks, Molly White (of “Web3 Is Going Just Great” fame) published a newsletter this morning about the difference between “effective altruism” and “effective accelerationism” and how, ultimately, both are turning out to be more of the same.

LEO satellite systems and SpaceX launching

If you want to learn more about low Earth orbit (LEO) satellites like SpaceX’s Starlink, Eutelsat OneWeb, and Amazon’s Project Kuiper, I co-wrote a deep dive into the topic with Geoff Huston of APNIC in the September issue of the Internet Protocol Journal. Starting on page 31, it goes into great technical detail on a number of points. It’s a great companion to the Internet Society’s LEO perspectives document that provides a higher-level view.

Please do share the article with others who you think should know more about LEO systems for Internet access.

I also learned a lot from the “Introduction to 5G” article that was the first half of the IPJ edition.

Back on November 3, I wrote about SpaceX being the only launch provider consistently launching rockets. To that point, since the beginning of November, SpaceX has launched 8 rockets (not including their Starship launch). Of those:

  • Five were Starlink launches
  • One was a resupply mission to the International Space Station (which is in LEO)
  • One was a “Transporter” mission that was a “rideshare” for small satellites: 90 payloads were deployed into LEO, from which 110+ individual satellites were expected to be deployed (some payloads were carriers holding multiple smaller satellites).
  • One was a launch of the final two O3b mPower satellites for medium Earth orbit (MEO) that SES needs to be able to provide almost-global Internet connectivity.

Eight production launches in about 3.5 weeks, plus the experimental Starship launch – no one else right now is capable of matching that launch cadence! Truly amazing work from the SpaceX teams. And so we as a society will continue to be so reliant on SpaceX until someone else can reach that kind of pace.

Streaming Wikipedia editing to Twitch

Back during the height of the pandemic when we were all stuck in our homes, I decided I wanted to dive deeper into Wikipedia editing (where I’ve been an editor for 19+ years)… and I also wanted to play with streaming to Twitch. So… I started livestreaming my Wikipedia editing to Twitch! Every. Single. Day. 😀 For well over a year.

I haven’t been doing that streaming as much lately, but I’ve started getting back into it. If that sounds like something you’d like to see, you are welcome to follow me on Twitch to be notified when I do.

If on the other hand that sounds about as exciting as watching paint dry.. well… I completely understand! 🤣

In all seriousness, though, if you ever have questions about Wikipedia in general, please feel free to email or otherwise contact me. It’s a personal passion of mine and I’m always glad to talk about all things Wikipedia!

WebSummit: How to stop the Internet from unravelling

Finally, if you have 23 minutes, this video of a WebSummit panel includes Andrew Sullivan, President & CEO of the Internet Society (my employer) and Sally Costerton, Interim President and CEO of ICANN, tackling questions around Internet shutdowns, regulating the Internet, fragmentation, and how we need to protect and defend the way that the Internet actually works, versus the simplistic way that some policymakers would like it to work.

P.S. If you like what you hear from Andrew, we’re hiring at the Internet Society and Internet Society Foundation.



Until the next time,
Dan



AVFTCN 028 – Let’s Not Forget the Magic of the Internet


It is so easy to get used to how the Internet works* that sometimes we forget to pause and just think about how amazing this technology can be. We just expect that it will work – and for the most part it does.

I’ve had some experiences this week… and plenty of time to think… that reminded me of how magical the Internet can be at times.

* – for those of us privileged enough to have appropriate levels of affordable Internet connectivity. About 1/3 of the world still has NO connectivity, and some percentage of those who have connectivity available… cannot afford it.

Why? Well… because I’ve been doing what so many other people have done – isolating because of testing positive for COVID-19. 🙁 Yes, indeed, after 1,350 days of being COVID-free (since the time of the WHO pandemic declaration), the day before US Thanksgiving I tested positive. It was not a huge surprise since my wife had been sick on the previous weekend and then tested positive on Sunday.. but we were hoping that I and our 14yo daughter would stay negative. Our daughter still IS negative, and so our focus is on keeping her that way. So my wife is staying in our second-floor bedroom, and I’m down in our basement. Our 14yo is spending most of her time in another room away from us.

We are all fully vaccinated and boosted, including with the latest shot, and so far I’ve had very few symptoms – just a runny nose and occasional sneeze. I’m hoping it stays that way!

But in the meantime.. how do you communicate when you are all isolating from each other? For, you know, things like planning meals or dealing with other household activities?

The answer these days is of course a group chat, or, when it’s a bit more involved… a group video call… using any of the many messaging / communication services.

Being DEEP down the Apple rabbit-hole as a family, we’ve just been firing up Apple FaceTime calls between the three of us, with people answering on their mobile or laptop depending upon what is most convenient.

And it works.

Boom… after we all accept the call, we can be speaking from different parts of our house about whatever we need to discuss.

But have you thought about HOW this video communication actually works?

Back in the early 2000s, I was a product manager at Mitel Networks in Canada and at one point was responsible for the video conferencing products. Later, video was being added into “softphone” products, and I learned an incredible amount from various IETF working groups that worked to make this all work – securely – through a wild number of network configurations.

This stuff is *HARD* to do right!

And while Apple uses a bunch of proprietary technology, some of it is known to be based on open standards – and also there are just certain ways that video communication works. (Side note: that Wikipedia article could use some help – if you know about FaceTime and want to edit it, updates would be welcome!)

Each time one of the three of us launches a FaceTime call, a whole bunch of connections get made:

  • A control connection using Apple Push Notifications (APN)
  • An identity connection (to also share encryption keys)
  • Multiple video streams (I’m assuming… the usual model is to have one for sending and one for receiving from each participant)
  • One or more audio streams (unless that is in the video)
  • Potentially additional supporting data streams
  • And all of this is encrypted, which adds additional layers of Transport Layer Security (TLS) and other encryption technologies. (See this page about FaceTime security to dive deeper)

All of that… for every single FaceTime call!

And… they have to do it in a way that navigates the MANY possible ways of connecting, with almost always network address translation (NAT) being involved, and often NAT at an ISP level (i.e. carrier-grade NAT / CGNAT). And connecting from mobile networks, WiFi, other technologies.
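(If you want to see NAT in action on your own connection, compare the address your machine thinks it has with the public address the rest of the Internet sees. A quick sketch on a Mac – the interface name en0 is an assumption, and ifconfig.me is just one of several services that echo back your public IP:

ipconfig getifaddr en0    # your local, private address (interface name may differ)
curl https://ifconfig.me  # the public address everyone else sees, after NAT

If the two differ, there’s at least one NAT between you and the rest of the Internet – and that’s part of what FaceTime and friends have to work around.)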

And… what makes it wilder is that in the case of the three of us here in the same house doing a group call, we’re doing all that three times!

Although the good news is that the actual video and audio streams should be staying local if we are all on the same WiFi network. If we’re not (I’m a networking geek – we have multiple WiFi networks!), those media streams are going out to Apple’s infrastructure and then coming back into our home network. (Thankfully we have a high speed connection!)

On one level, with so many variations, it’s amazing that this even works!

(But of course Apple has spent thousands and thousands and thousands of staff hours making it work.)

And… all of this complexity is INVISIBLE to the regular user. I just hit the button to call my wife and… ta da… we’re talking from two different floors of our house. I hit another button and our daughter is looped in.

It just works.

Back in the 1960s, Arthur C Clarke articulated three laws, of which the third was:

“Any sufficiently advanced technology is indistinguishable from magic.” – Arthur C Clarke

On one level, I think this kind of video communication does get into the magical area. You push buttons. People appear and you can talk with them in real-time with usually no real lag / latency.

It’s magic!

It’s the stuff that was only possible in sci-fi novels or TV shows when I was growing up. It’s amazing.

Except… of course… we (with appropriate connectivity) have gotten SO used to it that it’s lost any “magic” and just become “mundane”.

We don’t even think about it anymore.

We just EXPECT it to work.

And most of the time… it does.

And when it does not, we get very cranky and grumpy very quickly.

We expect to be able to make video calls. We expect to be able to order food and have it arrive at our door 20 or 30 minutes later. We expect to be able to share incredibly large image files with the 37 other people in the group chat we are in. We expect to be able to order something online and have it delivered within a day or two. We expect… We expect…. We expect…

Any “magic” is sadly gone.

Still … amidst the chaos and madness of our regular lives… amidst all the expectations we have now for how things should “just work”…

… I hope we can pause occasionally and just reflect for a moment on how amazing these experiences can be. What we can do… what we can learn… how we can communicate… how we can earn money… what we can create… how we can collaborate… how we can connect with each other…

Jump back 40 years and show these experiences to someone of that time and, yeah, they’d definitely think of them as *magic*!

(And then, yes, after that brief moment of reflection… you can get right back to cursing the application because it tells you your connection is unstable or because there’s a slight lag to one person’s video or whatever… 🤣)



Until the next time,
Dan



AVFTCN 027 – The Paywalling of the Web


Is the future of the Web really one where so much of the content is behind a paywall or a registration screen (some call it a “regwall”)? Are we returning to the pre-Internet days where so much information was locked away in gated “walled gardens” of online activity such as CompuServe, AOL, Prodigy, Delphi, etc.?

Each morning after our dog wakes me up sometime between 4:30 and 5:00am, I let her out in the backyard to do her thing… and then I start my morning scan of news. Over the last 15 years or so I’ve found that I’ve come to value the curation of Gabe Rivera’s teams and so I start out with Memeorandum for general news and politics, and then Techmeme for tech news and sometimes go to MediaGazer for media news.

My frustration that has been growing over the past five years or so is that so many mainstream news sites are locking their content behind paywalls or regwalls.

When I go to Memeorandum I’ve come to know that I just should not bother clicking the link if it is from the Washington Post, the Atlantic, the New York Times, Bloomberg, Wired… and even sometimes Reuters.

And it’s not just news sites… I’ve pretty much given up reading Cory Doctorow’s latest articles when he initially posts them because he goes to Medium first, which won’t let me read his articles without a subscription. (In his particular case, I can wait a few days and get it on his blog.) I’ve come to just ignore the email messages Medium sends me because pretty much all the articles now either require login or payment.

And this is why I moved this newsletter from Substack over to WordPress. Increasingly so many Substack newsletters I was reading would let you read a page or so… and then you have to pay.

Even here on WordPress.com, they actively promote that you should use them for a newsletter because you can charge for content:

Earn with your work
With gated content and paid subscriptions.

I could add in some blocks and set it all up so that you all had to pay to read these morning missives.

Good journalism is expensive

Look… I get it!

It’s incredibly expensive to do good journalism correctly.

And the advertising model that initially propped up so many media sites completely evaporated in the wake of Craigslist and Facebook Marketplace for classifieds, and the move of so many people to social media to discover news (read about the latest stats out this week).

I’ve read so many of the reports and articles about what’s happening in the media industry. I get it.

It’s an incredibly hard problem to solve.

And so I don’t begrudge news sites putting up paywalls to try to pay their reporters. For them it’s a return to the pre-Internet model where you had to pay for a subscription or buy a copy at a newsstand.

And with all the zillions of layoffs in the media industry, I don’t begrudge all the many individual reporters who have set up their own Substack or other newsletter so that they can attempt to continue their reporting and independently make a living.

I get it. I understand.

But … what is the end game?

But how does this all end? I don’t know about you all, but I simply can’t subscribe separately to EVERY media site and newsletter that I want to read. (Nor do I want to register on every single site so they can track me.) $20 here, $20 there… and sooner or later you’re spending hundreds of dollars. It’s just like all the zillion video streaming services asking for us to subscribe to each of them.

It just doesn’t work for most people.

Will we see more “bundling”? Which is basically what Medium is doing with their “Members-only” – pay a fee and get access to all the writers who contribute members-only articles.

Will we see more of a return to the “information services” of old where only those who could afford to pay would be able to see the good articles? (And those who could not afford to do so would not.)

Or what?

The crazies don’t put up paywalls

My additional concern is that the folks who don’t care about “good journalism” don’t put up paywalls. The conspiracy theorists, the fringe groups, the extremists of all political views, the anti-vaxxers, the climate change-deniers, the anti-science people.

Disinformation and misinformation are freely available to everyone. No paywalls. No regwalls.

THAT content is what can get found in search engines. THAT content is what will get indexed by generative AI systems… and then regurgitated as “fact” by the very plausible and authoritative large language models.

Good content behind paywalls… misinformation without.

Over time I see this further eroding the trustworthiness of the Web. (That is admittedly already eroded.)

The answer is… ?

I don’t know. I know it’s NOT the utter stupidity of the Canadian government’s failing attempt at a “link tax”. (And that’s a topic for another day…)

But I’m not sure that the realistic answer is paywalls (“subscriptions”). How many can the average user afford?

What do you think?

All I know is that with every paywall or regwall … the Web, and the broader Internet… seem… a… little… less… open… 😢



Until the next time,
Dan



AVFTCN 026 – Humane’s “AI Pin” gets closer to the Star Trek:TNG Communicator


From 1987 to 1994, Star Trek: The Next Generation (ST:TNG) brought an updated version of the classic show to our TV sets. Through my later years in university and first years out on my own, I tuned in each week to see the latest saga of Captain Picard, Commander Riker, Data, Worf, Deanna Troi, Geordi La Forge, the Crushers, and so many more.

Set 100 years after the original Star Trek series (aka “ST:TOS”), there were many advances in the fictional technology. One that captivated me (and probably many others) was the evolution of the “communicator” from a handheld device like a mobile phone… into a “badge” that was worn on your left breast.

You simply tapped on the badge and a whole realm of possibilities opened up. You could ask the computer for any information you needed. You could locate someone. You could call someone. It could serve as a universal translator between languages. It could do so many things!

It was so cool… and I always wanted one!

Over the years, several companies have made versions of a “communicator badge” that were Bluetooth devices that could connect to your phone. Essentially a wireless microphone/speaker that worked with your phone. They were cool… but you still needed your phone.

Enter Humane’s AI Pin

Until last week…

…when a company called Humane launched their “AI Pin” device.

MANY articles have been written about it (and even more .. and still more).

At a high-level, the basics are that it is a $700 device that you wear on your chest or lapel and you speak to it … basically like that Star Trek:TNG communicator!

For a $24/month subscription, you have access to various services that it can perform, including responding to questions, playing music, doing translations, summarizing info, etc.

Additionally, it has a camera that can record images or video. An example they give is holding up a food item and asking “can I eat this?”

It connects to all sorts of AI-related services, and uses large language models (LLMs) and natural language processing (NLP) to help with the voice interaction.

Plus, it has this cool laser display system that can project information onto your hand, and also lets you perform other gestures.

If you have 10 minutes, their launch video is interesting:

(And yes, loads of people have commented negatively about the video in articles … but regardless, it provides an overview.)

You just tap on it and start talking to it.

No separate phone required.

Now, it does require connectivity, and they mention that they have partnered with T-Mobile here in the USA. Each device has its own phone number and data connectivity. Effectively, it is a “phone” device as far as T-Mobile is concerned, even though it may never actually make a “phone” connection, doing everything over the data connection instead.

Always-on Connectivity

Which is, of course, given who I work for, one of the first things I thought about this device.

It will ALWAYS need Internet connectivity.

It can apparently do a few things in the device itself… but not much. Those earlier Bluetooth “Star Trek Communicator” devices needed the phone to be in range in order to do anything. The phone was the device that actually executed actions.

Similarly, if you use an Apple Watch or other similar smartwatch, you again need a phone in range for it to fully perform all its functions.

With this AI Pin, the device needs the network to be able to connect back to Humane’s servers and services. Otherwise you are just wearing a piece of metal and plastic on your shirt.

I do wonder how well this will work in many places.

I live in northwestern Vermont where our mobile connectivity is challenging outside of major roadways and our “urban” areas. (Our Vermont definition of “urban” may be different than yours! 🤣) You don’t have to drive far to find places with only very limited connectivity – or none at all.

But perhaps the main place these will be used will be urban areas, or within enterprises or other denser locations.

Or… if you look at the deal between T-Mobile and SpaceX… perhaps once they get the regulatory approval to launch new Starlink satellites that can provide “direct-to-mobile” connectivity, then these AI Pins will benefit from that other layer of connectivity. (Stay tuned… there are many steps before this can happen – and you can guarantee rivals will fight SpaceX every step of the way, so we may not see it for several years.)

The key point is that the AI Pin will always need connectivity – and presumably decent connectivity – to connect to its various AI services.

Subtracting functionality for a simpler interface

If you think about it – and this is a point Humane makes in its various presentations – we keep on subtracting functionality from our devices in a quest for the ultimate simple and minimal user interface.

Some of us can remember the ginormous mainframes and their terminals… which became mini-computers… which became desktop computers… which became laptop computers… which became smart phones… which became smartwatches and other wearable devices…

At each step, we’ve continued to remove user interface (UI) functionality. I’m writing this post on an iPad, which has less of a UI than my MacBook Pro laptop. I spend a lot of time interacting with people using my iPhone, which has a similar UI to the iPad, but just a smaller screen. I can also interact with people using my Apple Watch, which has an even more minimal UI.

The AI Pin takes this minimal interface so much farther. No screen at all (well… on the device… it can project via the laser thing). Really no UI on the device at all. Just tap it to do something.

Are we ready for voice-only?

Which raises the question – are we ready for a voice-only interface?

Maybe.

The past several years have been training us all… between Amazon’s Alexa, Apple’s Siri, Google’s Assistant, and all the others… we’re getting used to voice commands. How many millions of Amazon Echo devices are out there? (Side note… how many are actively being used?)

I think it’s also a generational thing. I’m a cranky old GenXer who deliberately DISABLES voice systems on all my devices. And I’ve only put one Amazon always-on-surveillance-device Echo in our house (where the primary use is to play music).

But I watch my 14yo daughter and how she interacts with systems. She uses Siri all the time on her iPhone. And using it, she can look up things far faster than I can type them into a search box. (Yes, we’ve of course raced each other.) She uses it on her laptop, and she uses voice all the time to interact with the Echo in so many ways.

My teenage daughter’s generation thinks nothing whatsoever of talking to devices.

The technology and systems have come so far since the ten years I spent in the voice-over-IP world back in the 2000s and early 2010s.

And with all the developments in AI systems, the voice interaction systems are so much better. It’s truly remarkable.

But… AI.. and accuracy?

But wait… how accurate can AI systems be? What about all the times when LLMs completely make up information?

Yes, this is a major issue.

In a completely shocking lack of fact checking, the Humane folks produced and published their launch video with a blatant error from their LLM. They asked the AI Pin when the next solar eclipse would be and where the best place to view it is.

The system answered correctly that the next total solar eclipse is April 8, 2024. But it then answered falsely that the best places to see it would be Exmouth, Australia, and East-Timor.

Screenshot from the Humane AI Pin launch video showing the two founders wearing black standing in front of a table with their devices on it. On the right side is the text in green “The next total solar eclipse will occur on April 8th 2024. Best places to see it are Exmouth, Australia, and East-Timor.”

🤯

How do I know this is not accurate?

Because the part of Vermont where I live is directly in the path of the April eclipse! (In fact, some organizations are seizing on the opportunity to rent out homes to people who will visit us to watch the eclipse. 🙂)

The SF Gate site figured out that these names were the best places to see the 2023 eclipse. 🤦‍♂️ (They also pointed out that later in the demo when he uses the camera to figure out the amount of protein in some nuts, the amount is significantly off!)

🔥🔥 MAJOR FAIL 🔥🔥

And it blows my mind that someone would not have fact-checked something as major as this launch video. 🤯

Be that as it may, there is a huge amount of work happening to make these systems more accurate. Over time I think they will improve – it’s just a question of how much time that will take.

(And a note that right now would NOT be a good time for anyone to rely on one of these if they need life-saving data!)

Privacy?

My other major concern is of course privacy. All this data is being transmitted from this AI Pin device up to various cloud services provided by different vendors.

Om Malik had a lengthy interview with Humane co-founder Imran Chaudhri where he specifically asked about privacy concerns. Chaudhri responded that privacy and data protection are critical elements of their approach – and that this is in fact why they are only having limited partners initially, because they have very specific privacy and data protection requirements.

He spoke about people being in control of their own data, and about partitioning data so that even the company doesn’t have access.

I have no doubt that this is part of their core values… today.

But what if their company fails … or is bought by another bigger company for whom privacy is not a core value?

I know people with a Fitbit device who are greatly concerned about what Google is doing now that they own Fitbit. (And Google is in fact phasing out Fitbit in some parts of the world and pushing people to move to their Pixel devices.)

I do worry about how private any device like this can be in the long term.

Will I get one?

Not yet. But… I won’t rule it out.

First, the $700 is a bit steep for me to just have a device to play with – and I don’t really feel like adding another $25/month (or whatever it is with taxes).

Second, I don’t know that I could trust the info coming out of it yet. The blatant errors in their demo show that it’s not ready for major usage yet.

Third, privacy. Privacy, privacy, privacy. There’s a reason I disable voice systems on my devices. 😀

Fourth, there just aren’t that many services yet. That will come, but right now there’s not enough for my frugal Vermonter personality to justify the cost.

BUT… if someone gave me one as a Christmas gift, I’d definitely play with it to see how it works! 🤣

I can see the promise and the potential.

I mean… I watched 7 years of ST:TNG… and then ST:Deep Space Nine… and then all the other Star Trek series that had this kind of device.

I have wanted something like this for the last 35 years! 😀

We’ll see.

Will this AI Pin device actually prove to be useful? Will it be the launch of a new way for users to interact?

Or will it fail and just be yet-another-exciting-device-that-winds-up-dead-in-a-drawer?

I don’t know. I do know that it’s exciting to see something like this with a new form factor and services.

I look forward to seeing what Humane does with it… and if there are other companies and services that join in with this type of device.

Meanwhile… this went on far longer than I expected… and I need to go drop off my daughter at school and start my work day!


Thanks for reading to the end. I welcome any comments and feedback you may have. What do YOU think about this kind of device? Would you get one?


Until the next time,
Dan



Do AI Systems Lie, Hallucinate, or Confabulate? (I’ll go for “lying”)

When ChatGPT and similar systems started being available, people noticed right away that they could provide completely wrong answers. But they would do so in language that was so confident and plausible (because that is how they are designed).

Some people started to say “ChatGPT lies about information”.

But almost immediately, people started pushing back and saying that it isn’t “lying” because that implies sentience or consciousness. Saying it is “lying” is “anthropomorphizing”, i.e. attributing human behavior to something that is very definitely not human.

Instead, some people said, let’s refer to this false information as “hallucinations”, as that is in fact a term used in AI research. So we say instead “ChatGPT hallucinates information.”

I personally like that term. It provides a way to explain to people that these AI tools just make stuff up!

But, as noted in this excellent Ars Technica article by Benj Edwards (that you really need to read to understand all this!), the use of “hallucination” has two issues:

  • It also is anthropomorphizing and ascribing human behavior to a non-sentient / non-human thing.
  • More importantly, saying an AI “hallucinates” has a nuance of being excusable behavior. “Oh, yes, Fred was just hallucinating when he said all that.” As if it was just random memories or a trip on some kind of drugs. It lets the AI creators off the hook a bit. They don’t have to take responsibility for their errors, because “it’s just the AI hallucinating”!

Which is fine… I can go along with that reasoning.

But… the author then suggests we instead use the psychology term “confabulation”, as in:

“ChatGPT confabulates information”

Hmm. While I get that “confabulation” may be more technically accurate, I think it still has the issues:

  • It is still anthropomorphizing.
  • It still lets developers not take responsibility. “Oh, it’s just the AI confabulating.”

But more importantly… “confabulation” is NOT A WORD PEOPLE REGULARLY USE!

At least, people who are not in psychology.

If we as technologists want to help the broader public understand these AI systems, both their opportunities and challenges, then we need to speak in plain language.

I do think we need to go back to the beginning and just say “ChatGPT lies”.

This has two important aspects:

  • All of us understand “lying”.
  • It puts the responsibility on the AI system - and its developers - for “behaving” that way.

Yes, it’s anthropomorphizing. No, ChatGPT and other AI systems are NOT human or sentient. No, they can’t really “lie” in the human understanding of it.

But we can use that term to help people understand what is happening here.

ChatGPT and other systems are lying. They are NOT giving you true information.

Let’s call it like it is.

——

P.S. It turns out that Simon Willison, who has been diving deep into the world of AI far more than I have, has written something similar: “We need to tell people ChatGPT will lie to them, not debate linguistics” - please read Simon’s post for another view!

——

Image credit: from Bing Image Creator (DALL-E) using the prompt “create an image showing an AI that is hallucinating”

With so many "Dan Yorks", ChatGPT’s interview questions are very generic

[Image: three question marks on a blue-grey background]

After Simon Willison wrote about how accurate the questions from ChatGPT were when he asked “I am interviewing Simon Willison. What are some good questions I can ask them?”, I had to give it a try. Simon’s results included some questions specific to the areas he works in, web development and open source, so I was curious to see what might come back for my name.

I was particularly curious because there are so many “Dan York”s out there on the Internet.

Alas, my response from ChatGPT was boringly generic:

I am interviewing Dan York. What are some good questions I can ask them?

  1. Can you tell us about your background and how you became involved in your current industry/field?
  2. Can you walk us through a recent project or accomplishment that you are particularly proud of?
  3. How do you stay current and informed about developments in your industry?
  4. Can you discuss a challenge or obstacle you have faced in your career and how you overcame it?
  5. How do you prioritize and manage your workload and responsibilities?
  6. How do you approach problem-solving and decision-making in your role?
  7. Can you discuss a time when you had to adapt to a change or new technology in your field?
  8. Can you talk about a team or collaboration experience and the role you played in its success?
  9. Can you describe a specific way in which you have contributed to improving processes or efficiencies in your organization?
  10. What are your long-term career goals and how do you plan to achieve them?

This could be for any of the many Dan Yorks who are out there. 😀

(But not a bad set of questions…)

So then I decided to try being a bit more specific, and next associated my name with my employer, the Internet Society. The results were actually decent questions to ask anyone associated with the Internet Society (but not necessarily me specifically):

I am interviewing Dan York of the Internet Society. What are some good questions I can ask them?

  1. Can you tell us about the mission and goals of the Internet Society?
  2. How does the Internet Society work to promote and protect internet freedom and human rights?
  3. Can you speak to the role of the Internet Society in addressing the digital divide and ensuring access to the internet for all?
  4. How does the Internet Society work to promote and support a free and open internet?
  5. Can you discuss any current initiatives or projects that the Internet Society is working on?
  6. How does the Internet Society collaborate with other organizations and governments to advance internet policy and governance?
  7. Can you speak to the importance of a multi-stakeholder approach in addressing internet-related issues?
  8. How does the Internet Society work to promote and support internet innovation and the development of new technologies?
  9. Can you discuss any challenges or obstacles the Internet Society faces in achieving its mission and goals?
  10. How can individuals and organizations get involved and support the work of the Internet Society?

Next I decided to try specifying the state where I live, Vermont. Maybe there’s another Dan York in Vermont who owns a business, because I don’t, and these questions were completely off for me. They could be for anyone living in Vermont who owns a business:

I am interviewing Dan York of Vermont. What are some good questions I can ask them?

  1. What inspired you to start your business in Vermont?
  2. What challenges have you faced as a business owner in Vermont?
  3. How do you see Vermont's economy and business landscape evolving in the next few years?
  4. Can you tell me about a project or accomplishment that you're particularly proud of in your business?
  5. How do you work with other local businesses and organizations in Vermont to support the community?
  6. How do you stay informed and educated on changes in Vermont's business regulations and policies?
  7. What advice would you give to someone who is considering starting a business in Vermont?
  8. How do you see Vermont's unique culture and environment impacting your business?
  9. Can you tell me about a specific way your business is environmentally or socially conscious?
  10. How do you see Vermont's location impacting your business in terms of logistics and transportation?

Just for grins, I then went directly to associating my name with one of my blogs. The questions are again a good set of generic ones that could apply to anyone’s blog:

I am interviewing Dan York who writes the blog "Disruptive Conversations". What are some good questions I can ask them?

  1. Can you tell us about your background and how you got started writing the "Disruptive Conversations" blog?
  2. What is the main focus of your blog and what topics do you typically cover?
  3. Can you share some of your most popular or impactful blog posts and what you think made them successful?
  4. How do you stay current with the latest trends and developments in the industry you write about?
  5. What are some of the most common challenges you face as a blogger, and how do you overcome them?
  6. How do you engage with your readers and build a community around your blog?
  7. Can you discuss any notable collaborations or partnerships you've had as a result of your blog?
  8. How do you measure the success of your blog, and what are your goals for its future?
  9. How do you stay motivated to continuously write and publish quality content on your blog?
  10. Can you share some tips for those who are interested in starting their own blog or writing content?

And that was as far as I decided to go.

All of these, though, are useful sets of questions. If someone were looking to do an interview in these areas, the questions are not a bad place to start.

Interesting to see various uses of ChatGPT.
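(For the technically curious: I ran all of these through the ChatGPT web interface, but the same ladder of prompts could be scripted. Below is a minimal sketch using the OpenAI Python client - the package, the model name, and the OPENAI_API_KEY environment variable are my assumptions about how you might do it, not what I actually did.)

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The same sequence of increasingly specific identities tried above.
variants = [
    "Dan York",
    "Dan York of the Internet Society",
    "Dan York of Vermont",
    'Dan York who writes the blog "Disruptive Conversations"',
]

for who in variants:
    prompt = f"I am interviewing {who}. What are some good questions I can ask them?"
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption - use whatever model is current
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {who} ---")
    print(response.choices[0].message.content)
```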

The Christmas Story, As Told by ChatGPT in Iambic Pentameter

the text "An AI christmas story" on a red and green gradient background

For those who celebrate Christmas, as I do… MERRY CHRISTMAS! 

As I am off celebrating the holiday with my family, I thought I’d ask an AI to tell the Bible’s Christmas story. So I went to ChatGPT and asked:

> Tell the biblical Christmas story in iambic pentameter

Why iambic pentameter? I don’t know… other than thinking it could be more interesting than just having ChatGPT regurgitate blocks of text. This requires a little bit more effort.

After getting the first version, I found the focus on Herod to be a bit odd, so I pressed the “Regenerate response” button to get a second version. Then I did that three more times, and decided to include all five versions here. I find the differences very interesting, as each takes a different view of the story, and includes or excludes different details. For instance:

  • Herod, Egypt, and Nazareth only show up in version 1.
  • The shepherds only appear in versions 2, 3, and 4.
  • The wise men are only in versions 2 and 3.
  • Gold, frankincense, and myrrh only appear in version 2.
  • Angels only appear in versions 3 and 4.
  • Version 5 has almost no details but instead focuses on the religious meaning.
  • None of them mentioned why Joseph and Mary were traveling (to register in Joseph’s ancestral town for a census used by the Roman government for taxation).

It also shows that you can’t just take ChatGPT’s answer as gospel (quite literally). Imagine if you had asked ChatGPT and received only version 1 - you would have a very different view of the story than from the other versions.
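(A side note for the technically curious: I generated these by clicking “Regenerate response” in the web interface, but the underlying chat API can return several samples from a single call. Here is a hedged sketch using the OpenAI Python client - the model name and temperature value are assumptions on my part, not what ChatGPT itself uses.)

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumption - substitute whatever is current
    temperature=1.0,        # nonzero temperature => varied retellings
    n=5,                    # ask for five independent versions at once
    messages=[{
        "role": "user",
        "content": "Tell the biblical Christmas story in iambic pentameter",
    }],
)

# Print each sampled version, numbered like the ones below.
for i, choice in enumerate(response.choices, start=1):
    print(f"\n=== Version {i} ===\n{choice.message.content}")
```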

No matter which story you may or may not agree with, including none of these, I wish you all a very merry Christmas!


Version 1

Once in the land of Judah, long ago
A virgin fair was betrothed to Joseph, a carpenter
But in her womb, a child was set to grow
Conceived by power of the Holy Ghost, as told by angel Gabriel

This child would be the savior of us all
A king, descended from the line of David
His birth had been foretold by prophets tall
And now, the time had come, the promise given

But in the land, a ruler did hold sway
A cruel and wicked man, by name of Herod
He heard the news of this newborn king at play
And in his jealousy and fear, did order

That every male child, two years and under
Should be put to death, in Bethlehem and near
To try and kill this child, the Son of Thunder
But Joseph took Mary and the babe in fear

And fled to Egypt, till the danger passed
They stayed in hiding, till the coast was clear
Then returned to Israel, at last
And settled in a town called Nazareth

Thus Jesus grew, in wisdom, age, and grace
And when the time was right, he took his place
As savior of the world, in love and peace
This is the story of the birth of Jesus, the Prince of Peace.


Version 2

In Bethlehem, a city of Judea,
A child was born, the savior of mankind.
This infant, wrapped in swaddling clothes so new,
Was laid in manger, for there was no room
In inns of Bethlehem, on that holy night.

The news of this great birth was quickly spread
To shepherds in the fields, who did not stay
But hurried to the stable, where they found
The newborn king, lying in a bed of hay.

The shepherds marveled at the sight they saw,
And they returned to their fields, praising God
For sending them the savior of the world.

But news of this great birth was not confined
To shepherds in the fields or Bethlehem's town.
Three wise men, guided by a shining star,
Came from the East, bearing gifts of gold,
Frankincense, and myrrh, to honor him.

And so, on this most holy night, the world
Was blessed with the arrival of the one
Who came to save us all, the son of God,
Jesus, the Christ, the savior of mankind.


Version 3

In Bethlehem, a town so small and poor,
A child was born to Mary, pure and fair.
This blessed babe, the Son of God, was sure
To bring salvation, love, and mercy rare.

The angels sang, and shepherds came to see
This newborn king, in stable low and mean.
Three wise men, led by a bright star, did flee
To offer gifts, and worship on their knees.

This holy child, so pure and innocent,
Was sent to save us all from sin and death.
He lived and loved and healed with great intent,
And willingly submitted to his earthly death.

But death could not contain him, as was plain,
For he rose again, and lives forevermore.
He brings salvation, hope, and joy, and reigns
As Lord of all, forever and always.


Version 4

The Christmas story, as told in the Bible,
Is one of joy and love, a miracle true:
It starts with Mary, chosen by the Lord
To be the mother of His only son,
A son who would redeem the world from sin.

The angel Gabriel appeared to Mary
And told her she would bear a son, and he
Would be the savior of the world, a king
Descended from the line of David, great.
Mary, though frightened, knew that this was right,
And so she said, "I am the Lord's servant.
May it be to me as you have said.”

The time came for Mary to give birth, and so
She and her husband Joseph traveled far
To Bethlehem, the city of David's birth.
But when they arrived, they found no room
In any inn, and so they had to stay
In a stable, where the baby Jesus lay.

The shepherds, watching over their sheep by night,
Were startled by a bright and shining light,
And saw an angel, who told them the news
Of Jesus' birth, and said, "Do not be afraid.
I bring you tidings of great joy today:
A savior has been born in Bethlehem, a boy.”

The shepherds went to see the baby Jesus,
And when they saw him lying in the manger,
They knew that this was the son of God, and so
They spread the word throughout the countryside.
And so it was that Christmas came to be
A day of joy and celebration, free
From sin and sorrow, as the world rejoiced
In the birth of Jesus, our beloved Lord.


Version 5

The Christmas story, as told in the Bible, is a tale of hope and redemption. It begins with the birth of Jesus, the Son of God, in a humble stable in Bethlehem.

Joseph and Mary, betrothed in marriage,
Were traveling to the town of David's line
When Mary, heavy with the child she'd carry,
Was forced to stop and give the baby time.

And so, in poverty and low estate,
The Son of God was born, a lowly king.
But though His birth was humble and sedate,
His purpose was to save the world, and bring

Salvation to all those who would believe.
For He was sent to be the sacrifice,
To bear the weight of sin upon His sleeve
And through His death, bring us to paradise.

And so the Christmas story, full of grace,
Brings hope and joy to all who seek His face.