Monday, May 29, 2023

Designing (Healthcare) Via Roblox

Here’s a question: which medical schools are incorporating Roblox into their curricula?

Interested readers can get back to me, but in the meantime I’m guessing none.  At best, very few.  And instead of “medical schools” feel free to insert whatever kind of “healthcare institution/organization” is interested in educating or training – which is to say, all of them.  By way of contrast, I was intrigued by the collaboration between Roblox and The Parsons School of Design.

Credit: The New School/Parsons

Perhaps you don’t know about Roblox, a creator platform whose vision is “to reimagine the way people come together to create, play, explore, learn, and connect with one another.”  As their website says: “We don’t make Roblox.  You do.” It claims to have almost 10 million developers using its platform, hosting some 50 million “experiences.”  

I first wrote about it in 2021, astonished that over half of American children used it, with some 37 million unique daily users. Today it has over 66 million unique daily users – some 214 million monthly active users.  The vast majority of the users – as much as 80% – are under 16, a fact Roblox is acutely aware of and is seeking to change.

Parsons and Roblox announced the collaboration last November.  “Partnering with Roblox offers Parsons students working in creative technologies an exciting opportunity to engage the complex intersection of visual culture and social structure, and to play with how we make meaning when we dress ourselves – in digital and physical worlds,” said Shana Agid, PhD, Dean of the School of Art & Media Technology.  The 16-week course culminated in a digital fashion showcase earlier this month.

“We as a university wanted to work on this project because we want to learn what skill set students need to be successful on this platform,” Professor Kyle Li said. “[Roblox is] also interested in shifting their audience from 12 and younger to 17 to 24. And I thought, ‘We have the perfect specimen to test all those things.’”  As The Verge reported, “The Parsons course is an extension of Roblox trying to prove that it’s a viable and legitimate tool for adult life.”  Roblox Founder and CEO David Baszucki is clear on this point: “Our goal is one platform, where age-appropriate experiences for every life stage can be found.”

Most of the Parsons students had not used Roblox prior to the course, but learned how digital design brings both new opportunities and limitations to their fashion expertise. “Working in digital gives you so much freedom in terms of the structures you want to have,” one student told The Verge. Another student told The Wall Street Journal: “You can make crazier looks for less money in the digital world.  Fabrics are expensive.”

Digital fashion is nothing new, whether in Roblox, gaming, or other Metaverse iterations.  For example, designer Rebecca Minkoff recently launched a collection for Roblox, noting this about digital fashion: “I don’t think this is going to go away.”

Other design schools, such as Drexel, the Fashion Institute of Technology, Pratt Institute, and the Savannah College of Art and Design, offer courses in digital design and the metaverse.  Epic Games just invested in a digital fashion company. And Roblox recently started letting creators make money from selling limited-run avatar gear.


Now, I don’t care all that much about fashion generally, even less about digital fashion, but I am hugely interested in what appeals to younger generations and the inevitable movement to a more digital economy.  And, I have to note, Roblox is interested not just in an older audience but also in healthcare in particular. A few examples:

  • Early last year Akili Interactive partnered with Roblox to offer EndeavorRx®, its prescription video game treatment. Eddie Martucci, CEO and Co-Founder of Akili Interactive, noted: “Roblox has changed how millions learn, work, connect and play, and we are excited to work together to further push the boundaries of our industries and continue to redefine the experience of medicine.”
  • Last fall Philips Norelco rolled out Shavetopia in Roblox, as part of its broader Movember program promoting men’s physical and mental health. “We launched Shavetopia to extend the social conversation around Movember beyond the physical world and into the digital world,” said marketing director Viestel da Silva.
  • Early this month Roblox Founder and CEO David Baszucki and his wife made a philanthropic gift to Stony Brook University so that biomedical engineer and neuroscientist Lilianne Mujica-Parodi can develop Neuroblox, a software program inspired by Roblox. The platform hopes “to open up a world of modeling possibilities for neuroscientists without training in computational sciences.” In other words, Roblox for neuroscientists.
  • The American Heart Association is allowing its Heart Hero characters to be used for 30 days in the Roblox game Race Clicker.  AHA says: “This is an important opportunity for the American Heart Association to meet kids where they are to share the benefits of mental and physical health to help them grow to reach their full potential.”

And, of course, there are various health or health-related games and experiences offered on the platform.

---------------

We’re failing our kids generally when it comes to their health.  We have a teen mental health crisis, fueled in no small part by social media. More than 40% of school-aged children have at least one chronic condition. The anti-vaxx movement, which was invigorated but not started by COVID, could have devastating long-term impacts, particularly on children.  And, of course, our healthcare system’s fumbling efforts towards more digital tools and interfaces baffle, frustrate, and turn off young people.

If healthcare thinks it is reaching young people through, say, Facebook, it is badly misreading its audience and badly underestimating how poorly Facebook has protected patient data.  If it wants to reach young people, it’s got to be thinking about gaming, Raspberry Pi, Scratch, TikTok – and Roblox.

Think back to Roblox’s vision -- “to reimagine the way people come together to create, play, explore, learn, and connect with one another.”  -- and tell me which of those goals you wouldn’t want a healthcare organization to share.  Think about how Parsons is using Roblox to give its students new tools to approach fashion design, and tell me why medical schools and other healthcare institutions/organizations shouldn’t also be giving healthcare professionals similar tools to approach healthcare differently, like Dr. Mujica-Parodi is doing.  Think about how healthcare needs to be more relevant to young people and tell me why Roblox wouldn’t help. 

As I said before, I don’t know what a healthcare Roblox would look like.  But I sure hope someone starts to figure it out -- soon.

Monday, May 22, 2023

AI Is Bright But Can Also Be Dark

If you’ve been following artificial intelligence (AI) lately – and you should be – then you may have started thinking about how it’s going to change the world. In terms of its potential impact on society, it’s been compared to the introduction of the Internet, the invention of the printing press, even the first use of the wheel. Maybe you’ve played with it, maybe you know enough to worry about what it might mean for your job, but one thing you shouldn’t ignore: like any technology, it can be used for both good and bad. 

If you thought cyberattacks/cybercrimes were bad when done by humans or simple bots, just wait to see what AI can do.  And, as Ryan Heath wrote in Axios, “AI can also weaponize modern medicine against the same people it sets out to cure.”

We may need DarkBERT, and the Dark Web, to help protect us.

Credit: Help Net Security

A new study showed how AI can create much more effective, cheaper spear phishing campaigns, and the author notes that the campaigns can also use “convincing voice clones of individuals.”  He notes: “By engaging in natural language dialog with targets, AI agents can lull victims into a false sense of trust and familiarity prior to launching attacks.” 

It’s worse than that. A recent article in The Washington Post warned:

That is just the beginning, experts, executives and government officials fear, as attackers use artificial intelligence to write software that can break into corporate networks in novel ways, change appearance and functionality to beat detection, and smuggle data back out through processes that appear normal.
The outdated architecture of the internet’s main protocols, the ceaseless layering of flawed programs on top of one another, and decades of economic and regulatory failures pit armies of criminals with nothing to fear against businesses that do not even know how many machines they have, let alone which are running out-of-date programs.

Credit: Reuters/Kacper Pempel illustration

Health care should be worried too. The World Health Organization (WHO) just called for caution in use of AI in health care, noting that, among other things, AI could “generate responses that can appear authoritative and plausible to an end user; however, these responses may be completely incorrect or contain serious errors…generate and disseminate highly convincing disinformation in the form of text, audio or video content that is difficult for the public to differentiate from reliable health content.”

It's going to get worse before it gets better; the WaPo article warns: “AI will give far more juice to the attackers for the foreseeable future.”  This may be where solutions like DarkBERT come in.

Now, I don’t know much about the Dark Web. I know vaguely that it exists, and that people often (but not exclusively) use it for bad things.  I’ve never used Tor, the software often used to keep activity on the Dark Web anonymous.  But some clever researchers in South Korea decided to create a Large Language Model (LLM) trained on data from the Dark Web – fighting fire with fire, as it were. This is what they call DarkBERT.

The researchers went this route because: “Recent research has suggested that there are clear differences in the language used in the Dark Web compared to that of the Surface Web.”  LLMs trained on data from the Surface Web were going to miss or not understand much of what was happening on the Dark Web, which is what some users of the Dark Web are hoping.

Credit: Jin, et al.

I won’t try to explain how they got the data or trained DarkBERT; what is important is their conclusion: “Our evaluations show that DarkBERT outperforms current language models and may serve as a valuable resource for future research on the Dark Web.”

They demonstrated DarkBERT’s effectiveness against three potential Dark Web problems:

  • Ransomware Leak Site Detection: identifying “the selling or publishing of private, confidential data of organizations leaked by ransomware groups.”
  • Noteworthy Thread Detection: “automating the detection of potentially malicious threads.”
  • Threat Keyword Inference: deriving “a set of keywords that are semantically related to threats and drug sales in the Dark Web.”

On each task, DarkBERT was more effective than comparison models. 
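
To make that a bit more concrete, here is a minimal sketch of what “noteworthy thread detection” looks like as a text-classification task. To be clear, this is my own illustration, not the researchers’ code: the checkpoint is a generic stand-in (DarkBERT itself isn’t broadly available, as noted below), the labels are hypothetical, and the classification head would need fine-tuning on labeled Dark Web threads before its scores meant anything.

```python
# Illustrative sketch of "noteworthy thread detection" as text
# classification. "bert-base-uncased" is a stand-in for a Dark Web
# domain-specific model like DarkBERT; labels and threads are hypothetical.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    pipeline,
)

MODEL_NAME = "bert-base-uncased"  # stand-in checkpoint, not DarkBERT

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=2,  # hypothetical: 0 = benign thread, 1 = noteworthy
)

classify = pipeline("text-classification", model=model, tokenizer=tokenizer)

# Hypothetical forum-thread snippets to score; with an untrained head
# these scores are random until the model is fine-tuned.
threads = [
    "Selling full database dump of a retail chain, 2M records, escrow accepted",
    "Looking for recommendations on hardening my home router",
]
for thread, result in zip(threads, classify(threads)):
    print(f"{result['label']} ({result['score']:.2f}) :: {thread[:60]}")
```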

The researchers aren’t releasing DarkBERT more broadly yet, and the paper has not yet been peer reviewed.  They know they still have more to do: “In the future, we also plan to improve the performance of Dark Web domain specific pretrained language models using more recent architectures and crawl additional data to allow the construction of a multilingual language model.”

Still, what they demonstrated was impressive. GeeksforGeeks raved:

DarkBERT emerges as a beacon of hope in the relentless battle against online malevolence. By harnessing the power of natural language processing and delving into the enigmatic world of the dark web, this formidable AI model offers unprecedented insights, empowering cybersecurity professionals to counteract cybercrime with increased efficacy.

It can’t come soon enough.  The New York Times reports there is already a wave of entrepreneurs offering solutions to try to identify AI-generated content – text, audio, images, or videos – that can be used for deepfakes or other nefarious purposes.  But the article notes that it’s like antivirus protection; as AI defenses get better, the AI generating the content gets better too.  “Content authenticity is going to become a major problem for society as a whole,” one such entrepreneur admitted.

When even Sam Altman and other AI leaders are calling for AI oversight, you know this is something we all should worry about. As the WHO warned, “there is concern that caution that would normally be exercised for any new technology is not being exercised consistently with LLMs.” Our enthusiasm for AI’s potential is outstripping our wisdom in using it.

Credit: Natalie Peeples/Axios

Some experts have recently called for an Intergovernmental Panel on Information Technology – including but not limited to AI – to “consolidate and summarize the state of knowledge on the potential societal impacts of digital communications technologies,” but this seems like a necessary but hardly sufficient step. 

Similarly, the WHO has proposed its own guidance for Ethics and Governance of Artificial Intelligence for Health.  Whatever oversight bodies, legislative requirements, or other safeguards we plan to put in place, they’re already late.

In any event, AI from the Dark Web is likely to ignore and try to bypass any laws, regulations, or ethical guidelines that society might be able to agree to, whenever that might be.  So I’m cheering for solutions like DarkBERT that can fight it out with whatever AI emerges from there. 

Monday, May 15, 2023

Healthcare: Make Better Mistakes

I saw an expression the other day that I quite liked. I’m not sure who first said it, and there are several versions of it, but it goes something like this: let’s make better mistakes tomorrow.

Boy howdy, if that’s not the perfect motto for healthcare, I don’t know what is.


Health is a tricky business.  It’s a delicate balancing act between – to name a few -- your genes, your environment, your habits, your nutrition, your stress, the health and composition of your microbiome, the impact of whatever new microbes are floating around, and, yes, the health care you happen to receive.

Health care is also a tricky business. We’ve made much progress in medicine, developed deeper insights into how our bodies work (or fail), and have a multitude of treatment options for a multitude of health problems. But there’s a lot we know that we aren’t actually using, and there’s an awful lot we still don’t know.

It’s very much a human activity. Different people experience and/or report the same condition differently, and respond to the same treatments differently. Everyone has unique comorbidities, the impact of which upon treatments is still little understood. And, of course, until/unless AI takes over, the people responsible for diagnosing, treating, and caring for patients are very much human, each with their own backgrounds, training, preferences, intelligence, and memory – any of which can impact their actions.

All of which is to say: mistakes are made. Every day. By everyone.

Patients don’t disclose pertinent information, or don’t follow recommendations.  Clinicians get tired, don’t make important connections, don’t see/remember applicable research. People input incorrect information, or information gets processed incorrectly. Algorithms fail to take into account differing populations. Some people just aren’t very good at their jobs; perhaps they never were, perhaps they’ve failed to keep up, perhaps physical or mental issues have degraded their abilities.

No one really knows how many mistakes are made in healthcare, or exactly what the implications of those mistakes are for patients (although many estimates have been made for both), but on this we should all be able to agree: there are too many.  Maybe someday we’ll have perfect health and perfect health care – such as when our uploaded digital twins are treated by AI clinicians – but until that time we have to accept that there are going to be mistakes.

We should strive for no mistakes, or at least to minimize them, but, for heaven’s sake, the very least we should resolve is to try to make better mistakes.

There are many things we would probably agree on to help accomplish this. Clinicians and other health care workers should get the appropriate amount of training, on an ongoing basis. We shouldn’t work them to the point of burnout. We should improve patients’ health literacy and health habits. None of that is controversial, but, unfortunately, we probably wouldn’t get a passing grade on any of them.

Mistakes are still going to happen. But if we’re still going to make them, here are some suggestions for people working in healthcare to keep in mind to at least make them better mistakes:

  • Does what you are doing make things simpler or more complex?  Some complexity is inevitable, but, by and large, making things simpler should result in fewer (and better)  mistakes. And, of course, one of my favorite pieces of advice: do simple better.
  • Does what you are doing give patients more agency, or less?  Historically, patients have been expected to follow physicians’ advice, without question, but those days are over, or they should be.  Helping patients help themselves should lead to better mistakes.
  • Does what you are doing treat the condition, or the person?  Over a hundred years ago, Dr. William Osler said: “The good doctor treats the disease; the great doctor treats the patient who has the disease.” That kind of “greatness” should lead to better mistakes. The role of the primary care physician to oversee and coordinate all of a patient’s conditions and care has largely been lost, as has anyone’s overall view of the patient. Trying to have as broad an understanding of patients as possible should lead to better mistakes.
  • Do people complain a lot about something you do? If enough people tell you they don’t like something, maybe you shouldn’t be doing that, in that way. The classic example is mammograms; no woman I know likes them, although they’re relentlessly urged to get them, so why haven’t we figured out less unpleasant options?  Pre-authorizations fall into the same category, as would narrow networks, excessive charges, or requiring redundant/excessive forms.  Reducing complaints should lead to better mistakes. Again, a great piece of advice: stop doing stupid stuff.
  • Does what you are doing make patients’ lives worse? If you’re taking patients to collections, you’re not making their lives better. If they have to choose between eating or buying prescriptions, you’re not making their lives better. If patients have to spend hours on the phone to make appointments or get questions answered, you’re not making their lives better. Thinking about making patients’ lives, not just their immediate health, better should lead to better mistakes.
  • Does what you are doing protect the people/institutions providing the care, or the people receiving it?  There are lots of examples for this, but the overarching one to me is that instead of a culture to identify and remediate mistakes, we have a malpractice culture that seeks to cover them up and forces an adversarial system on patients. Similarly, those forms patients blindly sign before care is rendered aren’t there to protect patients. The healthcare system is supposed to serve patients, not exist to support health care workers and institutions.  Remembering that should lead to better mistakes.

-------------

Reform comes slowly, if at all, to our healthcare system. Many of us would like to completely revamp and rebuild it, but at this point it’d be like trying to rebuild a plane while in flight. We can’t get off the plane and we’re not prepared to have it crash. So, if we can’t have a whole new healthcare system, one without all the perverse incentives and structural mistakes, perhaps the least we can strive for is to make better mistakes in the one we have.

Monday, May 8, 2023

Would You Picket Over AI?

I’m paying close attention to the strike by the Writers Guild of America (WGA), which represents “Hollywood” writers.  Oh, sure, I’m worried about the impact on my viewing habits, and I know the strike is really, as usual, about money, but what got my attention is that it’s the first strike I’m aware of where the impact of AI on writers’ jobs is one of the key issues.

It may or may not be the first time, but it’s certainly not going to be the last.


The WGA included this in their demands: “Regulate use of artificial intelligence on MBA-covered projects: AI can’t write or rewrite literary material; can’t be used as source material; and MBA-covered material can’t be used to train AI.” I.e., if something – a script, treatment, outline, or even story idea – warrants a writing credit, it must come from a writer.  A human writer, that is.

John August, a screenwriter who is on the WGA negotiating committee, explained to The New York Times: “A terrible case of like, ‘Oh, I read through your scripts, I didn’t like the scene, so I had ChatGPT rewrite the scene’ — that’s the nightmare scenario.”

The studios, as represented by the Alliance of Motion Picture and Television Producers (AMPTP), agree there is an issue: “AI raises hard, important creative and legal questions for everyone.” The AMPTP wants both sides to continue to study the issue, but noted that under the current agreement only a human can be considered a writer.

Still, though, we’ve all seen examples of AI generating remarkably plausible content.  “If you have a connection to the internet, you have consumed AI-generated content,” Jonathan Greenglass, a tech investor, told The Washington Post. “It’s already here.”  It’s easy to imagine some producer feeding an AI a bunch of scripts from prior installments to come up with the next Star Wars, Marvel universe, or Fast and Furious release.  Would you really know the difference?

Illustration by Greg Clarke/The Hollywood Reporter

Sure, maybe AI won’t produce a Citizen Kane or The Godfather. As Alissa Wilkinson wrote in Vox: “But here is the thing: Cheap imitations of good things are what power the entertainment industry. Audiences have shown themselves more than happy to gobble up the same dreck over and over.”

Still, though, all of Hollywood should be nervous.  AI can already duplicate actors’ voices, and is getting good at generating digital images of them too.  We’ve seen actors “de-aged,” and it’s only a matter of time before we see actors – living or dead – appearing in scenes they never actually shot.  For that matter, we may not need camera operators, sound engineers, special effects experts, editors, gaffers, and the whole litany of people who also work on television shows and movies.  That includes directors and producers.    

The biggest barrier to more use of AI may not be AI capabilities or the WGA contract so much as the fact that, under existing law, AI-generated works can’t be copyrighted, and the studios are going to be loath to spend millions on something that doesn’t have that protection.

The AI jobs issue is not limited to Hollywood, of course.  “Whether it’s music, photography, whatever the medium, there are creatives who are understandably and justifiably worried about the displacement of their livelihoods,” Ash Kernen, an entertainment and intellectual property attorney who focuses on new technology, told NBC News. And it’s much, much broader than that; for example, IBM says it is pausing hiring for jobs it thinks AI could do, impacting as many as 7,800 jobs already.

“There was an assumption in the past that if you were a professional your skills were always going to be needed,” Patricia Campos-Medina, executive director of Cornell University’s Worker Institute, told Politico. “Now we’re starting to see the same level of insecurity … other workers have had to deal with since the Industrial Revolution.”

Credit: Shutterstock

If you are a “creative” worker, AI is coming for your job.  If you are a knowledge worker, AI is coming for your job.  If your job requires strength and/or skill, AI-powered robots will soon come for it too.  Even if your job requires you to demonstrate empathy – like, say, doctors – AI is coming for it.

“I think almost every job will change as a result of AI,” Tom Davenport, a professor of information technology and management at Babson College, told WaPo.  He added, though: “It doesn’t mean those jobs will go away.”  As Andy Kessler writes in the WSJ: “Will artificial intelligence destroy jobs? As sure as night follows day. Old jobs disappear and new jobs are created all the time.”

Some companies are trying to get a jump on how to incorporate AI without necessarily eliminating jobs.  A new study looked at a Fortune 500 company that incorporated generative AI in its customer service, and found it increased productivity by 14% on average, with the greatest impact on the least skilled and newest workers.  Plus, the authors claim: “AI assistance improves customer sentiment, reduces requests for managerial intervention, and improves employee retention.”  Who’s afraid of AI now?

Well, every worker should be, to some extent.  Hollywood writers are lucky in that they have a union, and that union realizes there is an issue, but AI offers too much potential benefit to both the writers and the studios for them to try to keep AI away.  They just have to figure out what is in their mutual best interest, which is not going to be easy.

Maybe you agree with the AMPTP that this is an important issue, deserving more study.  Well, we don’t have the kind of time that study commissions usually take. We do need guardrails and even legislation – such as around privacy, fake information, and intellectual property – but the AI genie is already escaping the bottle.

Your job may not have a union, and you and your coworkers may not have had the time or expertise to really think about what AI might do to those jobs. Someone else will figure out the technology, we often tell ourselves, but that someone may not care about the impact on you, the person in that job.  But here’s the bottom line: if you can’t figure out how AI can enhance your job, chances are that AI will replace it.

In particular, whether patients are ready for it or whether clinicians have figured out how best to use it, make no mistake: AI is coming to healthcare.

As for strikes, I’m more worried that once AI figures out what we do to some people, in health care and more generally, it’ll be the one to go on strike.

Monday, May 1, 2023

Bluesky Ahead

I’ve been thinking about writing about Bluesky ever since I heard about the Jack Dorsey-backed Twitter alternative, and decided it is finally time, for two reasons. The first is that I’ve been seeing so many other people writing about it, so I’m getting FOMO.  The second is that I checked out Nostr, another Jack Dorsey-backed Twitter alternative, and there’s no way I’m trying to write about that (case in point: Jack’s Nostr username is npub1sg6plzptd64u62a878hep2kev88swjh3tw00gjsfl8f237lmu63q0uf63m.  Seriously).

Credit: Bluesky Social

It’s not so much that I’ve come to hate Twitter, although Elon Musk is making it harder to like it, as that our general dissatisfaction with existing social media platforms makes it a good time to look at alternatives.  I’ve written about Mastodon and BeReal, for example, but Bluesky has some features that may make sense in the Web3 world that we may be moving into.

And, of course, I’m looking for any lessons for healthcare.

Bluesky describes itself as a “social internet.”  It started as a Twitter project in December 2019, with the aim “to develop an open and decentralized standard for social media.”  At the time, the ostensible goal was that Twitter would be a client of the standard, but events happened, Jack Dorsey left Twitter, Elon Musk bought it, and Bluesky became an independent LLC.  It rolled out an invite-only, “private beta” for iOS (Apple) users in March 2023, followed by an Android version in mid-April (again, invite-only).  People can sign up to be on the waitlist.  There are supposedly over 40,000 current users, with a million or so people reportedly on the waitlist.

Credit: Bluesky

By all accounts, it is similar to Twitter in many ways.  You can search for and follow other users, you can create posts (please don’t call them “skeets”) of up to 256 characters, you can attach (some types of) media, and you get a feed of suggested posts from other users.  You can like, reply to, or reshare posts.  It doesn’t yet have all of Twitter’s features, such as DMs or hashtags. It is working on “composable moderation.”

The point isn’t how much it looks and acts like Twitter but how different the underlying platform is.  It is built on what is called the AT Protocol – Authenticated Transfer Protocol. A blog post last fall explained what makes it unique:

Account portability. A person’s online identity should not be owned by corporations with no accountability to their users. With the AT Protocol, you can move your account from one provider to another without losing any of your data or social graph.

Algorithmic choice. Algorithms dictate what we see and who we can reach. We must have control over our algorithms if we're going to trust in our online spaces. The AT Protocol includes an open algorithms mode so users have more control over their experience.

Interoperation. The world needs a diverse market of connected services to ensure healthy competition. Interoperation needs to feel like second nature to the Web. The AT Protocol includes a schema-based interoperation framework called Lexicon to help solve coordination challenges.

Performance. A lot of novel protocols throw performance out of the window, resulting in long loading times before you can see your timeline. We don’t see performance as optional, so we’ve made it a priority to build for fast loading at large scales.

Credit: Bluesky

There’s a lot to unpack there – more than I’m qualified to do – but here are a couple of key takeaways.  Currently, you don’t have much, if any, control over your Twitter feed (or your other social media feeds); the platform algorithms dictate.  Bluesky promises that the AT Protocol will allow users both to know what algorithm is being used and to choose from a library of algorithms.  How users will understand the consequences of different choices is not clear.

CEO Jay Graber says:

Our goal isn't to create every algorithm in-house, but to enable the developer community to bring new algorithms to users swiftly and effortlessly…We want a future where you control what you see on social media. We aim to replace the conventional "master algorithm," controlled by a single company, with an open and diverse "marketplace of algorithms."
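
To give a flavor of what a “marketplace of algorithms” could mean in practice, here is a toy sketch – mine, not Bluesky’s actual API – in which feed-ranking algorithms are just interchangeable functions, registered by name, and the user decides which one builds the feed:

```python
# Toy sketch of "algorithmic choice": ranking algorithms as interchangeable
# functions anyone can contribute. Conceptual only, not Bluesky's real API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    author: str
    text: str
    likes: int
    followed: bool  # does the viewer follow the author?

# The "marketplace": a registry of ranking functions.
ALGORITHMS: dict[str, Callable[[list[Post]], list[Post]]] = {}

def register(name: str):
    def wrap(fn):
        ALGORITHMS[name] = fn
        return fn
    return wrap

@register("most-liked")
def most_liked(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=lambda p: p.likes, reverse=True)

@register("friends-first")
def friends_first(posts: list[Post]) -> list[Post]:
    # Followed authors first, then by likes within each group.
    return sorted(posts, key=lambda p: (not p.followed, -p.likes))

def build_feed(posts: list[Post], choice: str) -> list[Post]:
    # The user, not the platform, decides which algorithm runs.
    return ALGORITHMS[choice](posts)

timeline = [
    Post("alice", "hello world", likes=5, followed=True),
    Post("brand", "buy our stuff", likes=900, followed=False),
]
for post in build_feed(timeline, "friends-first"):
    print(post.author, "-", post.text)
```

The point of the design is that adding a new algorithm doesn’t require the platform’s permission; it’s just another entry in the registry.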

Equally important, Bluesky’s goal is that you could port your Bluesky experience – your list of followers, your historical feed, etc. – to other platforms (presumably that are also built on the AT Protocol).  It would tear down the “walled gardens” that have been recreated with existing social media platforms.

Kade Garrett, writing in Decrypt, points out: “In essence, the AT protocol would enable the creation of not a single social network, but a federation of social networks that could interact with each other…At a technical level, this would allow you to self-host the servers of your own company, profile, or social media platform.”  

As Bluesky tweeted: “We can switch mobile carriers without losing our phone numbers. If we could switch between social apps without losing our identity or social graph, then social media would be a competitive open market again.”  It sees switching platforms as more akin to changing mobile phone providers – keeping the same number – than to switching email providers, which requires a new email address.

All this is based on what Bluesky refers to as “self-authenticating protocol,” which moves authority to authenticate from the host to the user.   But I’m going to leave the explanation of how that works as an exercise for the interested reader.  Mr. Garrett explains the importance of this: “The goal of such a design is to secure user data and make the user platform experience resistant to influence from corporations, governments, and other centralized entities.”
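
Even without those details, the gist can be sketched with a plain Ed25519 signature (the AT Protocol’s actual identity and record formats, with DIDs and signed repositories, are more involved): because each record carries its own proof of authorship, any host – not just the one that stored it – can verify it.

```python
# Rough sketch of self-authenticating data: the record travels with its
# own signature, so any host can verify who wrote it without trusting the
# server it came from. Plain Ed25519 here, not the AT Protocol's formats.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The user's identity is a keypair the user controls, not a row in some
# platform's database -- which is also what makes the account portable.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

record = json.dumps({"text": "hello from a portable account"}).encode()
signature = private_key.sign(record)

# Any server handed (record, signature, public_key) can check authorship;
# verify() raises InvalidSignature if the record was tampered with.
public_key.verify(signature, record)
print("record verified independently of its host")
```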

Who wouldn’t prefer that?

--------------

Social media is a mess.  Most platforms have been built on the you-are-the-product approach that has done untold damage to our privacy and to the level of our discourse. The kind of platform Bluesky seeks to be holds lots of appeal – although whether it can work, much less make a viable business, remains to be seen.

Healthcare has been a mess even longer than social media. Sure, it pays lip service to our privacy, but has failed to protect it (e.g., hospitals), and is only belatedly recognizing the types of loopholes it has (e.g., health trackers). We’re all subject to more algorithms than we realize (e.g., prior authorizations), and AI is going to exponentially increase that. It talks up interoperability, and now has FHIR and TEFCA, but if you think you are now in control of your data, you’re misguided.

I don’t know how, or even if, the AT Protocol could be used in healthcare, but healthcare sure needs something like it.