Just a guy in Vermont trying to connect all the dots...
Oct 26
Slides for my ISC2 Security Congress session on “Demystifying Routing Security”
Today at the ISC2 Security Congress 2023 in Nashville, TN, I gave a well-received talk on “Demystifying the World of Routing Security”. Unfortunately, the mobile app for the event had (and still has) the wrong set of slides. Instead of mine, the attached deck was from a 2019 talk. So I told participants I would put the slides up on one of my sites. And here they are:
As you will see, many of the slides are about the Mutually Agreed Norms for Routing Security (MANRS) initiative.
Also, for people seeking info about how to be involved with the "MANRS+" effort, the link is: https://www.manrs.org/about/manrs-working-group/
Thanks to all who attended - and especially to the five who helped me with the on-stage demonstration. 😀
Oct 21
Blue hat
Just a photo of my blue Rotary hat. (Testing something out with image uploads – this IS my test site, after all.)
Oct 13
Migration in Blue
Looked up the other day and there was a flock of geese going across the blue sky…
Sep 30
No More Status
Booking airline tickets for my first business trip since December 2019, I get down to the part where United notes your status level and it says…
“General”
No “Premier” status of any level.
Which makes total sense given:
2020 - 0 flights
2021 - 0 flights
2022 - 0 flights
2023 - 2 flights (so far)
🤣
I totally understand why I am now just a “general” traveler. 😀
Sep 22
The Curious Aspect of Facebook Supporting Multiple Personas
I find it fascinating that Meta just announced the ability for Facebook users to have multiple personal profiles attached to their single Facebook account. So you can have different “personas” for interacting differently with different communities.
Now, this is nothing very new. We’ve had this in the Fediverse since its beginnings. You can have as many accounts on different instances as you want, and many apps let you seamlessly switch between them. I use the Ice Cubes app for Mastodon on my mobile devices, and with a tap on an icon in the lower right corner of the app, I can switch to a different profile. Other social media services have had this capability, too.
But what I find fascinating is that, as I remember it, Facebook for so long did NOT want you to do this. They promoted the notion that you used your “real name” and that Facebook was a place where you could go to interact with real people, not potentially anonymous ones. And in fact they seemed to encourage the blending and blurring of work and personal lives.
I remember this being a big deal to them - and something that differentiated Facebook from other services that allowed anonymity or pseudonymity.
Or at least that is what I remember. And so it is fascinating to see the pivot to allowing people to have different accounts for different facets of their lives. Which DOES reflect the reality of how most of us like to interact with people online.
Whether this incentivizes more people to use Facebook, I don’t know. I’ve decreased my time there mostly because of their extremely privacy-invasive systems. Multiple personas will not bring me back. But I am only one person. What about you? Will this make you do anything more on Facebook?
Sep 21
Techxit: The UK Declares Its Exit from the High-Tech Startup World
No one in their right mind would now want to start up a high-tech company in the UK. With a last-minute addition to the Online Safety Bill (OSB), the UK government made it clear that startups are no longer welcome in the UK. Previously, the OSB applied to “regulated services” that had to be above […]
Sep 07
TDYR 413 – Overcoming Fatigue and Malaise
May 01
43% of the Web Can No Longer (Easily) Auto-Share to Twitter
As of today, May 1, 2023, 43% of web sites will no longer be able to easily auto-share posts to Twitter. I’m referring, of course, to WordPress, which W3Techs shows as powering around 43% of all sites they scan.
Due to the continued incomprehensible decisions being made by Twitter’s new management, Automattic, the company behind WordPress.com, has stated that it has discontinued the easy auto-sharing of posts through its hosted WordPress.com service, and also through the Jetpack Social service used by many people (myself included) who operate their own WordPress instances.
The issue is that Twitter decided to start charging for API access, and as Automattic notes:
The cost increase is prohibitive for us to absorb without passing a significant price increase along to you, and we don’t see that as an option. We have attempted to negotiate a path forward, but haven’t been able to reach an agreement in time for Twitter’s May 1 cutoff.
When you publish a new post on WordPress.com or any WordPress site using Jetpack, it will no longer be automatically shared out to Twitter. You can, of course, manually copy and paste the URL from your site over into Twitter. And you can potentially use some other auto-sharing plugin that has decided to pay Twitter’s API fees.
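To make the mechanics concrete, here is a minimal sketch (in Python, not the actual Jetpack code) of what any such auto-sharing plugin has to do under the hood: a single POST to Twitter’s v2 API, which now sits behind the paid tiers. The function name, the TWITTER_TOKEN environment variable, and the example URL are all placeholders; you would need an OAuth 2.0 user token with the tweet.write scope.

```python
# Minimal sketch of auto-sharing a new blog post to Twitter.
# Assumes TWITTER_TOKEN holds an OAuth 2.0 user access token with the
# "tweet.write" scope - the kind of API access Twitter now charges for.
import os

import requests


def share_post_to_twitter(title: str, url: str) -> None:
    """Announce a newly published post via the Twitter v2 API."""
    resp = requests.post(
        "https://api.twitter.com/2/tweets",
        headers={"Authorization": f"Bearer {os.environ['TWITTER_TOKEN']}"},
        json={"text": f"New post: {title} {url}"},
    )
    resp.raise_for_status()  # fails if the token lacks a paid API tier


share_post_to_twitter("My latest post", "https://example.com/my-latest-post/")
```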
Now, of course, not all of the 43% of web sites using WordPress used this auto-sharing capability. Many sites did not, but many did - and this allowed Twitter to be the place where you could be notified when someone you followed published something new.
Of all the many ridiculous decisions Twitter’s management has made in the past six months, this excessive charging for API access seems to me to be one of the MOST short-sighted.
One of the reasons I used Twitter was to get the latest news and content. Now Twitter is reducing the amount of content that will be shared. The API limits are expected to affect public service announcements - and now they will affect the sharing of blog posts, too.
I get that Twitter’s new owners desperately need to figure out ways to make money, but this doesn’t seem to be the right one.
In my mind, if you want your social service to be THE place for people to go for the latest news and content, then you want to reduce any friction involved with posting content INTO your service.
The reality is that you (Twitter) need that content far more than the content providers need you!
The Good News
There was some good news in the post from Automattic - specifically that they will soon be adding Mastodon auto-sharing, as well as Instagram:
However, we’re adding Instagram and Mastodon very soon. In the meantime, auto-sharing to Tumblr, Facebook, and LinkedIn still works as expected.
I don’t personally care as much about the IG linkage, but the Mastodon auto-sharing will be hugely helpful, as that is where I am spending most of my social time these days. There are no API fees there, and content can be shared in many ways.
You can already do this auto-sharing to Mastodon using ActivityPub plugins, but this announcement indicates it will be brought more into the main WordPress / Jetpack functionality, which will make it that much easier for people to use.
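For comparison, here is the same sketch pointed at Mastodon instead. This is illustrative Python, not what Jetpack will actually ship; the MASTODON_TOKEN environment variable and the instance URL are placeholders. The access token is the free kind any Mastodon instance will issue to an application you create under its Development settings.

```python
# Minimal sketch of auto-sharing a new blog post to Mastodon.
# Assumes MASTODON_TOKEN holds an access token for an application
# registered on your instance - no API fees involved.
import os

import requests


def share_post_to_mastodon(title: str, url: str,
                           instance: str = "https://mastodon.social") -> None:
    """Announce a newly published post via the Mastodon statuses API."""
    resp = requests.post(
        f"{instance}/api/v1/statuses",
        headers={"Authorization": f"Bearer {os.environ['MASTODON_TOKEN']}"},
        data={"status": f"New post: {title} {url}"},
    )
    resp.raise_for_status()


share_post_to_mastodon("My latest post", "https://example.com/my-latest-post/")
```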
I look forward to trying the Mastodon sharing out when it becomes available!
Meanwhile… this announcement means there are even fewer reasons for me to be checking Twitter anymore. Sad to see the continued decline. 🙁
Apr 08
Do AI Systems Lie, Hallucinate, or Confabulate? (I’ll go for “lying”)
When ChatGPT and similar systems started becoming available, people noticed right away that they could provide completely wrong answers. But they would do so in language that was confident and plausible (because that is how they are designed).
Some people started to say “ChatGPT lies about information”.
But almost immediately, people started pushing back and saying that it isn’t “lying” because that implies sentience or consciousness. Saying it is “lying” is “anthropomorphizing”, i.e. attributing human behavior to something that is very definitely not human.
Instead, some people said, let’s refer to this false information as “hallucinations”, as that is in fact a term used in AI research. So we say instead “ChatGPT hallucinates information.”
I personally like that term. It provides a way to explain to people that these AI tools just make stuff up!
But, as noted in this excellent Ars Technica article by Benj Edwards (that you really need to read to understand all this!), the use of “hallucination” has two issues:
- It also is anthropomorphizing and ascribing human behavior to a non-sentient / non-human thing.
- More importantly, saying an AI “hallucinates” has a nuance of being excusable behavior. “Oh, yes, Fred was just hallucinating when he said all that.” As if it was just random memories or a trip on some kind of drugs. It lets the AI creators off the hook a bit. They don’t have to take responsibility for their errors, because “it’s just the AI hallucinating”!
Which is fine… I can go along with that reasoning.
But… the author then suggests we instead use the psychology term “confabulation”, as in:
“ChatGPT confabulates information.”
Hmm. While I get that “confabulation” may be more technically accurate, I think it still has the same issues:
- It is still anthropomorphizing.
- It still lets developers not take responsibility. “Oh, it’s just the AI confabulating.”
But more importantly… “confabulation” is NOT A WORD PEOPLE REGULARLY USE!
At least, people who are not in psychology.
If we as technologists want to help the broader public understand these AI systems, both their opportunities and challenges, then we need to speak in plain language.
I do think we need to go back to the beginning and just say “ChatGPT lies”.
This has two important aspects:
- All of us understand “lying”.
- It puts the responsibility on the AI system - and its developers - for “behaving” that way.
Yes, it’s anthropomorphizing. No, ChatGPT and other AI systems are NOT human or sentient. No, they can’t really “lie” in the human understanding of it.
But we can use that term to help people understand what is happening here.
ChatGPT and other systems are lying. They are NOT giving you true information.
Let’s call it like it is.
——
P.S. It turns out that Simon Willison, who has been diving deep into the world of AI far more than I have, has written something similar: “We need to tell people ChatGPT will lie to them, not debate linguistics” - please read Simon’s post for another view!
——
Image credit: from Bing Image Creator (DALL-E) using the prompt “create an image showing an AI that is hallucinating”