Monday, October 14, 2024

Make Mine Möbius

As both a long-ago math major and someone with an overdeveloped sense of whimsy, I’ve long been fascinated by Möbius strips. You know Möbius strips: they look like they should be two-sided, but they are actually one-sided (as you can test by tracing a line all the way around without lifting your pencil). They’re simple to make, but deceptively complex (mathematicians would add that a Möbius strip is a non-orientable surface with a single boundary curve, but let’s not go there).
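That one-sidedness is easy to check numerically, too. Here's a minimal sketch using the standard mathematical parametrization of the strip (nothing to do with Google's design): trace once around the loop and the surface normal comes back pointing the other way.

```python
import numpy as np

def mobius(u, v, R=1.0):
    # Standard parametrization: a strip twisted once around a circle of radius R.
    # u runs around the loop (0 to 2*pi); v runs across the width of the strip.
    x = (R + v * np.cos(u / 2)) * np.cos(u)
    y = (R + v * np.cos(u / 2)) * np.sin(u)
    z = v * np.sin(u / 2)
    return np.array([x, y, z])

def unit_normal(u, v, h=1e-6):
    # Surface normal from the cross product of numerical partial derivatives
    du = (mobius(u + h, v) - mobius(u - h, v)) / (2 * h)
    dv = (mobius(u, v + h) - mobius(u, v - h)) / (2 * h)
    n = np.cross(du, dv)
    return n / np.linalg.norm(n)

n_start = unit_normal(0.0, 0.0)
n_loop = unit_normal(2 * np.pi, 0.0)  # the same point on the strip, one full trip later
print(np.dot(n_start, n_loop))  # close to -1: the normal comes back flipped
```

That flipped normal is exactly what "non-orientable" means: there is no consistent way to say which side you're on.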

The Gboard double-sided keyboard. Credit: Google Japan

You probably never found yourself thinking, hmm, I wish I had a keyboard that was a Möbius strip, but the good folks at Google Japan thought some of us wished we had a keyboard we could type on from both sides. So, voilà: they invented a Möbius strip keyboard.

"'I want to use the back of the keyboard as well as the front!'" Google Japan writes, in one translation, of the problem it aimed to solve. "In response to the voice of such users, I made a keyboard that has no front or back. A unique keyboard with two sides. Gboard double-sided version."

“If you turn the keyboard upside down, you can’t type at all. After racking our brains trying to find a solution to this major problem, we came up with this keyboard,” Google Japan noted in another translation of the blog post.

They call it the Gboard double-sided, aka the “Infinity Keyboard.” (I’m kind of disappointed they call a Möbius strip “double-sided,” but I’ll blame the marketing people, not the product people).

"The endless structure has no front or back," Google Japan claims of its design. "You can type at any angle. If you put the Gboard double-sided version [somewhere], suddenly a circle of people will form there. If we used it together, smooth 'teamringWorkin.' You'll come up with some original ideas.” 

One can certainly hope so.

Now, that's teamwork. Credit: Google Japan

The Infinity Keyboard has some 208 mechanical keys, accessible at any angle and from both “sides.” They are laid out in an ortho-linear 26x8 grid, with per-key RGB lighting (hence one application the developers mention: using it as a Christmas wreath). The keys are hot-swappable, allowing users to easily customize the array. With all those keys, users can have keys specifically for typing, gaming, and coding, as well as for other languages. Sadly, it isn’t wireless; it uses a USB-C connection.
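The numbers check out: 26 columns times 8 rows is 208 keys. A toy sketch of what "no front or back" implies for addressing keys on a loop (the indexing scheme here is purely hypothetical, not Google's actual firmware): columns wrap around, so no column is the "first" one.

```python
ROWS, COLS = 8, 26  # the ortho-linear 26x8 layout: 208 keys total

def key_index(row, col):
    # Hypothetical indexing for a looped keyboard: columns wrap around
    # the strip, so any starting column is as good as any other.
    return (row % ROWS) * COLS + (col % COLS)

print(ROWS * COLS)                            # 208
print(key_index(0, 26) == key_index(0, 0))    # True: one full lap lands you back home
```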

Google Japan estimates it weighs “20.8 donuts,” which Fast Company figures is about 2.2 pounds (based on the weight of a Krispy Kreme Original Glazed).  
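Fast Company's math is easy to reproduce. A minimal sketch, assuming an Original Glazed weighs about 48 grams (that per-donut figure is my assumption, not from the source):

```python
DONUT_G = 48.0      # assumed weight of one Krispy Kreme Original Glazed, in grams
G_PER_LB = 453.592  # grams per pound

keyboard_lb = 20.8 * DONUT_G / G_PER_LB
print(round(keyboard_lb, 1))  # about 2.2 pounds
```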

Google Japan shows a number of uses for the keyboard, including simultaneous use by several people but also to wear as a bracelet or as the aforementioned Christmas wreath. And, they point out, it would be great in weightless conditions, since it has no top or bottom.



It turns out that Google Japan has been releasing unique keyboard designs each year on October 1, because (I had never counted) 10/1 reads as 101, the number of keys on a typical keyboard. Previous efforts include the Gboard Bar, which has all the keys laid out horizontally (some 5 feet long!), the Gboard Bending Spoon, which lets users input by – you guessed it – bending a spoon, and Gboard Caps, a wearable keyboard in the (rough) shape of a cap.

Although each of these keyboards exists and is functional, Google has no plans to commercialize them. They’re intended to engender some smiles, and, perhaps, spur some creative thinking. However, Google has made the schematics and firmware open source on GitHub, along with STL files for 3D printing. You can make one yourself and see what it does for your creativity/productivity.

Have at it. Credit: Google Japan
Or, if you’re not that technically oriented, they have a PDF that lets you make a paper version just to get a sense of it.  

Marcus Mears III, reviewing the Infinity Keyboard in TechRadar, says: “I love seeing these bizarre keyboard designs pop up…It's this type of ingenuity and playful creation that we need to keep advancing in the world of computer peripherals - where would we be if we never moved on from trackballs and beige membrane keyboards? Certainly not at the Gboard Double-Sided Version.” 

Jesus Diaz, in Fast Company, goes further in his praise: “If anything, this ongoing keyboard joke shows that there’s nobody in the world like the Japanese to create the quirkiest, most fun designs on the planet.” He adds: “Nobody else can compete with their imagination, but here I humbly submit, Google Japan, two final words for the next Gboard: hula hoop.”

I look forward to seeing what they come up with next October.

---------

We live in a world that, for the most part, has never advanced beyond the QWERTY keyboard design, which, as the story goes, was originally intended to slow typists down so they wouldn’t jam the typewriter keys. Obviously, it’s been a long time since that was our big problem, yet we’ve gotten so used to that layout that we’re still using it. So if it takes a double-sided, Möbius strip Infinity Keyboard to jar our thinking about keyboards (or anything else), I say: good work, Google Japan!

Much as I love the concept, I have to admit that I’ll probably never use a Gboard double-sided keyboard, and I’m certainly not going to attempt to build one. But I love that the design team at Google Japan thought of it, and I hope others are inspired to build their own, to play around with it, and to see what new ideas it might spark.

I’ve written before about people trying to break traditional design paradigms – e.g., umbrellas or even the wheel. We get so used to doing things in a particular way using existing designs that we often don’t remember that, hey, other designs are possible, and some of those designs may open up not only new ways of doing the things we’re doing but also help us identify new things to do. Design should be an enabler, not a constraint.

Their video talks about wanting “a keyboard with a twist, one that turns the problem space outside-in.” That’s what design should be helping us do.

Sunday, October 6, 2024

You're Not Going to Automate MY Job

Last week U.S. dockworkers struck, for the first time in decades. Their union, the International Longshoremen’s Association (ILA), was demanding a 77% pay increase, rejecting an offer of a 50% pay increase from the shipping companies. People worried about the impact on the economy, how it might affect the upcoming election, even whether Christmas would be ruined. Some panic hoarding ensued.

Then, just three days later, the strike was over, with an agreement for a 60% wage increase over six years. Work resumed. Everyone’s happy, right? Well, no. The agreement is only a truce until January 15, 2025. While money was certainly an issue – it always is – the real issue is automation, and the two sides are far apart on that.

Fighting automation isn't going to work

Most of us aren’t dockworkers, of course, but their union’s attitude towards automation has lessons for our jobs nonetheless.

The advent of shipping containers in the 1960s (if you haven’t read The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger, by Marc Levinson, I highly recommend it) made increased use of automation in the shipping industry not only possible but inevitable. The ports, the shipping companies, and the unions all knew this, and have been fighting about it ever since. Add better robots and, now, AI to the mix, and one wonders when the whole process will be automated.

This is the world of shipping today. Credit: Logistics Management
Curiously, the U.S. is not a leader in this automation. Margaret Kidd, program director and associate professor of supply chain logistics at the University of Houston, told The Hill: “What most Americans don’t realize is that American exceptionalism does not exist in our port system. Our infrastructure is antiquated. Our use of automation and technology is antiquated.”

Eric Boehm of Reason agrees:

The problem is that American ports need more automation just to catch up with what's considered normal in the rest of the world. For example, automated cranes in use at the port of Rotterdam in the Netherlands since the 1990s are 80 percent faster than the human-operated cranes used at the port in Oakland, California, according to an estimate by one trade publication.

The top-rated U.S. port in the World Bank’s annual performance index is only 53rd. Sixty-two ports worldwide – out of some 1,300 – are considered semi- or fully automated. According to Heather Long in WaPo, the U.S. has three ports that are considered fully automated and another three that are considered semi-automated. Loading and unloading times in the U.S. are longer than at competing ports. Increased use of automation, in some fashion and to some degree, is necessary to stay competitive.

Yet the dockworkers are unmoved. In a letter to members, the ILA leader vowed: “Let me be clear: we don’t want any form of semi-automation or full automation. We want our jobs—the jobs we have historically done for over 132 years.” He insists the new six-year contract must include “absolute airtight language that there will be no automation or semiautomation.”

“The rest of the world is looking down on us because we’re fighting automation,” said Dennis Daggett, executive vice president of the ILA. “Remember that this industry, this union has always adapted to innovation. But we will never adapt to robots taking our jobs.”

This is what needs to get resolved by January. Wages are important, but only for those who have jobs. It very much reminds me of last year’s Hollywood writer’s strike, which was partly about money, but also about not letting studios use generative AI to do their jobs.

Seem familiar? Credit: Mandalit del Barco/NPR News
It’s worth pointing out that dockworkers may not quite fit the typical blue-collar union worker stereotype. The Wall Street Journal reports that the average full-time dockworker on the West Coast made $233,000, while more than half of their East Coast counterparts earned over $150,000. Not all dockworkers earn such amounts, nor is full-time work always available, but – still.

Resisting automation is a great rallying cry to union members, but is not realistic. “The argument to stop automation now is slamming the barn door decades after the horse has gotten out. This is not going to work long term. The economic incentives behind it are too strong,” Harley Shaiken, a professor emeritus at the University of California at Berkeley, told The Washington Post.

Mr. Levinson told WaPo: “In the past, the longshore unions have agreed to various types of automation, but there’s always been some kind of price attached in terms of protecting the jobs and protecting the union’s jurisdiction. And I assume that there is some price at which this dispute will be resolved.”

Professor Kidd, in The Hill, urged: “The ILA needs to be looking at a long-term vision. There’s no industry — journalism, academia, manufacturing — that hasn’t been changed by technology.”

Along those lines, Erik Brynjolfsson, the director of Stanford University’s Digital Economy Lab, suggested to The Hill:

I find it very short-sighted of the dockworkers, or any workers, to be pushing against automation if you can instead, find a way that the gains get shared. I would hope that there’s an opportunity there to strike an agreement where there is a lot more automation, not less automation and that some of the benefits get shared with the dockworkers and others.

This is not just a dockworker’s issue. As Ms. Long wrote in WaPo, “the bigger reason everyone should pay attention is that this is an early battle of well-paid workers against advanced automation. There will be many more to come.” Or, as Allison Morrow quipped in CNN: “The bots come for all of us, which is why the outcome of the port strike is particularly important to watch.”

Maybe you’re not a longshoreman, or a Hollywood writer. But the future is coming for your job too. I was struck by the title of an NYT op-ed by Jonathan Reisman, M.D.: I’m a Doctor. ChatGPT’s Bedside Manner Is Better Than Mine. As Dr. Reisman concludes:

In the end, it doesn’t actually matter if doctors feel compassion or empathy toward patients; it only matters if they act like it. In much the same way, it doesn’t matter that A.I. has no idea what we, or it, are even talking about.

I think of another quote from Professor Brynjolfsson, from a WSJ article earlier this year: “This recognizes that tasks—not jobs, products, or skills—are the fundamental units of organizations.” I.e., when it comes to thinking about the future of your job, you really need to recognize which of its tasks could be done as well or better by automation/AI. There are going to be more of them than you might like.

The future is here.

Monday, September 30, 2024

Someone (Else) Should Regulate AI

There’s some good news/bad news about AI regulation. The good news is that this past weekend California Governor Gavin Newsom vetoed the controversial S.B. 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The bad news is that he vetoed S.B. 1047.

Or maybe it’s the other way around.

Regulating AI is tricky. Credit: NCSL

Honestly, I’m not sure how I should feel about the veto. Smarter, more knowledgeable people than me had lined up on both sides. No legislation is ever perfect, of course, and it’s never possible to fully anticipate the consequences of most new laws, but a variety of polls indicate that most Americans support some regulation of AI.

“American voters are saying loud and clear that they don’t want to see AI fall into the wrong hands and expect tech companies to be responsible for what their products create,” said Daniel Colson, Executive Director of the Artificial Intelligence Policy Institute. “Voters are concerned about AI advancement—but not about the U.S. falling behind China; they are concerned about how powerful it can become, how quickly it can do so and how many people have access to it.”

Credit: AIPI

S.B. 1047 would have, among other things, required safety testing of large AI models before their public release, given the state the right to sue AI companies for damages caused by their AI, and mandated a “kill switch” in case of catastrophic outcomes. Critics claimed it was too vague, only applied to large models, and, of course, would stifle innovation.

In his statement explaining his veto, Governor Newsom pointed out the unequal treatment of the largest models and “smaller, specialized” models, while stressing that action is needed and that California should lead the way. He pointed out that California has already taken some action on AI, such as on deepfakes, and punted the issue back to the legislature, while promising to work with AI experts on improved legislation/regulation.

The bill’s author, Senator Scott Wiener, expressed his disappointment: “This veto is a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet.” Moreover, he added: “This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers, particularly given Congress’s continuing paralysis around regulating the tech industry in any meaningful way.”

Indeed, as on most tech issues, Congress has been largely missing in action. “States and local governments are trying to step in and address the obvious harms of A.I. technology, and it’s sad the federal government is stumped in regulating it,” Patrick Hall, an assistant professor of information systems at Georgetown University, told The New York Times. “The American public has become a giant experimental population for the largest and richest companies in the world.”

I don’t know why we’d expect any more from Congress; it’s never gotten its hands around the harms caused by Facebook, Twitter, or Instagram, and the only reason it took any action against TikTok was because of its Chinese parent company. It may take Chinese AI threatening Americans for Congress to act.

As was true with privacy, the European Union was quicker to take action, agreeing on regulation – the A.I. Act – last year, after debating it for some three years. That being said, the Act won’t be in effect until August 2025, and the details are still being drafted. Meanwhile, big tech companies – mostly American – are working to weaken it.

So it goes.

Summary of EU AI Act Credit: Analytics Creator
In the absence of new legislation, not all is lost. For example, Owen J. Daniels and Jack Corrigan, writing in FedScoop, outline three approaches regulators should be taking:

First, agencies must begin to understand the landscape of AI risks and harm in their regulatory jurisdictions. Collecting data on AI incidents — where AI has unintentionally or maliciously harmed individuals, property, critical infrastructure, or other entities — would be a good starting point.

Second, agencies must prepare their workforces to capitalize on AI and recognize its strengths and weaknesses. Developing AI literacy among senior leaders and staff can help improve understanding and more measured assessments of where AI can appropriately serve as a useful tool.

Third and finally, agencies must develop smart, agile approaches to public-private cooperation. Private companies are valuable sources of knowledge and expertise in AI, and can help agencies understand the latest, cutting-edge advancements. Corporate expertise may help regulators overcome knowledge deficiencies in the short term and develop regulations that allow the private sector to innovate quickly within safe bounds.

Similarly, Matt Keating and Malcolm Harkins, writing in CyberScoop, warn: “Most existing tech stacks are not equipped for AI security, nor do current compliance programs sufficiently address AI models or procurement processes. In short, traditional cybersecurity practices will need to be revisited and refreshed.” They urge that AI developers build with security best practices in mind, and that organizations using AI “should adopt and utilize a collection of controls, ranging from AI risk and vulnerability assessments to red-teaming AI models, to help identify, characterize, and measure risk.” 

In the absence of state or federal legislation, we can’t just throw our hands up and do nothing. AI is evolving much too fast.

-----------

There are some things that I’d hope we can agree on. For example, our images, voices, and other personal characteristics shouldn’t be allowed to be used/altered by AI without our consent. We should know what information is original and what is AI-generated/altered. AI shouldn’t be used to ferret out even more of our personal information. We should be careful about to whom we sell/license it, and we should be hardening all of our technology against the AI-driven cyberattacks that will, inevitably, come. We need to determine who is responsible, and how, for which harms.

And we need to have a serious discussion about who benefits from AI. If AI is used to make a handful of rich people even richer, while costing millions of people jobs, that is a societal problem that we cannot just ignore – and must not allow.

Regulating a new technology, especially a world-changing one like AI, is tricky. Do it too soon/too harshly, and it can deter innovation, especially while other jurisdictions impose no similar rules. Do it too late/too lightly, and, well, you get social media.

There’s something important we all can do. When voting this fall, and in every other election, we should be asking ourselves: is this candidate someone who understands the potentials and perils of AI and is prepared to get us ready, or is it someone who will just try to ignore them?

Monday, September 23, 2024

Red Alert About Red Buttons

In a week where, say, the iconic brand Tupperware declared bankruptcy and University of Michigan researchers unveiled a squid-inspired screen that doesn’t use electronics, the most startling stories have been about, of all things, pagers and walkie-talkies.

Pushing that red button probably isn't going to be good. Credit: Bing Image Creator

Now, most of us don’t think much about either pagers or walkie-talkies these days, and when we do, we definitely don’t think about them exploding. But that’s what happened in Lebanon this week, in ones carried by members of Hezbollah. Scores of people were killed and thousands injured, many of them innocent bystanders. The suspicion, not officially confirmed, is that Israel engineered the explosions.

I don’t want to get into a discussion about the Middle East quagmire, and I condemn the killing of innocent civilians on either side, but what I can’t get my mind around is the tradecraft of the whole thing. This was not a casual weekend cyberattack by some guys sitting in their basements; this was a years-in-the-making, deeply embedded, carefully planned move.

A former Israeli intelligence official told WaPo that, first, intelligence agencies had to determine “what Hezbollah needs, what are its gaps, which shell companies it works with, where they are, who are the contacts,” then “you need to create an infrastructure of companies, in which one sells to another who sells to another.” It’s not clear, for example, whether Israel planted the devices during manufacturing or during shipping, or, indeed, whether its shell companies actually were the manufacturer or the shipper.

Either way, this is some James Bond kind of shit.

Exploded pager. Credit: AFP
The Washington Post reports that this is what Israeli officials call a “red-button” capability, “meaning a potentially devastating penetration of an adversary that can remain dormant for months if not years before being activated.” One has to wonder what other red buttons are out there.

Many have attributed the attacks to Israel’s Unit 8200, which is roughly equivalent to the NSA.  An article in Reuters described the unit as “famous for a work culture that emphasizes out-of-the-box thinking to tackle issues previously not encountered or imagined.”  Making pagers explode upon command certainly falls in that category.

If you’re thinking, well, I don’t carry either a pager or a walkie-talkie, and, in any event, I’m not a member of Hezbollah, don’t be so quick to think you are off the hook. If you use a device that is connected to the internet – be it a phone, a TV, a car, even a toaster – you might want to be wondering if it comes with a red button. And who might be in control of that button.

Just today, for example, the Biden Administration proposed a ban on Chinese software used in cars. “Cars today have cameras, microphones, GPS tracking and other technologies connected to the internet. It doesn’t take much imagination to understand how a foreign adversary with access to this information could pose a serious risk to both our national security and the privacy of U.S. citizens,” said Commerce Secretary Gina Raimondo. “In an extreme situation, foreign adversaries could shut down or take control of all their vehicles operating in the United States all at the same time.”

“The precedent is significant, and I think it just reflects the complexities of a world where a lot of connected devices can be weaponized,” Brad Setser, a senior fellow at the Council on Foreign Relations, told The New York Times. In a Wall Street Journal op-ed, Mike Gallagher, head of defense for Palantir Technologies, wrote: “Anyone with control over a portion of the technology stack, such as semiconductors, cellular modules, or hardware devices, can use it to snoop, incapacitate or kill.”

Similarly, Bruce Schneier, a security technologist, warned: “Our international supply chains for computerized equipment leave us vulnerable. And we have no good means to defend ourselves…The targets won’t be just terrorists. Our computers are vulnerable, and increasingly so are our cars, our refrigerators, our home thermostats and many other useful things in our orbits. Targets are everywhere.”

If all this seems far-fetched, last week the FBI, NSA, and the Cyber National Mission Force (CNMF) issued a Joint Cybersecurity Advisory detailing how the FBI had just taken control of a botnet of 260,000 devices. “The Justice Department is zeroing in on the Chinese government backed hacking groups that target the devices of innocent Americans and pose a serious threat to our national security,” said Attorney General Merrick B. Garland. The hacking group is called Flax Typhoon, working for a company called Integrity Technology Group, which is believed to be controlled by the Chinese government.

Ars Technica described the network as a “sophisticated, multi-tier structure that allows the botnet to operate at a massive scale.” It is the second such botnet taken down this year, and one has to wonder how many others remain active. Neither botnet was believed to be preparing anything to explode, being more focused on surveillance, but their malware could certainly cause economic or physical damage.

Unit 8200, meet Flax Typhoon.

Sophisticated? Yeah. Credit: Black Lotus Labs

Earlier this year Microsoft said Flax Typhoon had infiltrated dozens of organizations in Taiwan, targeting “government agencies and education, critical manufacturing, and information technology organizations in Taiwan.” Red buttons abound.

--------------

Ian Bogost, a contributing writer for The Atlantic, tried to be reassuring, saying that your smartphone “almost surely” wasn’t going to just explode one day. “In theory,” Professor Bogost writes, “someone could interfere with such a device, either during manufacture or afterward. But they would have to go to great effort to do so, especially at large scale. Of course, this same risk applies not just to gadgets but to any manufactured good.”

The trouble is, there are such people willing to go to such great effort, at large scale.

We live in a connected world, and it is growing evermore connected. That has been, for the most part, a blessing, but we need to recognize that it can also be a curse, in a very real, very physical way.

If you thought pagers exploding was scary, wait until self-driving cars start crashing on purpose. Wait until your TVs or laptops start exploding. Or wait until the nanobots inside you that you thought were helping you suddenly start wreaking havoc instead.

If you think the current red button capabilities are scary, wait until they are created – and controlled – by AI.

Monday, September 16, 2024

Oh, Give Me a Home...Please!

It’s way too expensive. There’s often not enough of it where/when needed. Too much of it is of substandard quality. It remains rooted in outdated standards and practices. It is hyper-local. Private equity firms have taken a big interest, driving up prices. Most significantly, its presence or absence has a huge impact on people’s quality of life.

I must be talking about health care, right? No -- housing.

3D printed homes in Austin. Credit: Icon/Twitter

America is in the midst of a housing crisis. Home prices have surged 54% since 2019, and 5.8% in the past year. The National Association of Realtors reports that the median price for an existing single family home is $422,000. A Washington Post analysis indicates that rents have gone up by 19% since 2019. Although increases have cooled lately, Harvard’s Joint Center for Housing Studies says that half of renters spend more than 30% of income on rent, and a quarter spend more than 50%.

Credit: Washington Post
Meanwhile, we’re not building nearly enough homes. Zillow says we’re 4.5 million homes short, while other estimates put the number as high as 7 million. And when builders do build new houses, they’re not focusing on so-called starter homes. Between increases in land and materials, and more prescriptive local regulations, the economics don’t work. “You’ve basically regulated me out of anything remotely on the affordable side,” Justin Wood, the owner of Fish Construction NW, told The New York Times.

New research from the University of Kansas takes a contrarian view: most metropolitan areas have plenty of housing; it’s just that not enough of it is affordable to low income households. “Our nation’s affordability problems result more from low incomes confronting high housing prices rather than from housing shortages,” co-author Kirk McClure said. “This condition suggests that we cannot build our way to housing affordability.”

Whichever side is right, keep in mind that 60% of Gen Z worry they might never be able to afford a home, and 52% of Gen Z renters have struggled to pay their rent. Some 6.7 million households live in substandard homes, “with multiple structural deficiencies or lacking basic features such as electricity, plumbing, or heat.” And, of course, the U.S. has an estimated 653,000 homeless people at any given time.

Improving the housing situation is something that both Presidential candidates agree on, although their solutions differ. Former President Trump believes illegal immigrants are driving up housing costs, so stopping the influx and perhaps deporting millions of them will cause prices to go down. He would also “eliminate costly regulations, and free up appropriate portions of federal land for housing,” according to a spokesperson.

Vice President Harris, on the other hand, wants to build 3 million new units, give first time buyers $25,000, give more tax credits, and “expand rental assistance for Americans including for veterans, boost housing supply for those without homes, enforce fair housing laws, and make sure corporate landlords can’t use taxpayer dollars to unfairly rip off renters.”

They’re talking more about housing than health care.

A 2023 Pew survey found that Americans are broadly supportive of policies to increase housing supply, such as allowing apartments to be built in more areas or making permit decisions faster. They are most keen on such changes in what are now largely commercial areas, rather than in the residential areas they might live in.

That’s NIMBY: Not in My Back Yard. People fear that allowing lower cost homes or multi-family units in their neighborhood might decrease their own home’s value.  NIMBY, of course, is much broader than just housing. We want more manufacturing, but not near us. We need power plants, water filtration centers, and solid waste landfills, but somewhere else. We need places to raise all those cows, pigs, and chickens we eat, not to mention the plants that process them, but, good heavens, the smell! The mess! And, please, please, don’t make us live near poor people.

There is now a countervailing movement, at least for housing: YIMBY. “I could not be more thrilled that every top Democrat in America is becoming a Yimby!” Laura Foote, the executive director of the national Yimby Action group, said on a recent Harris fundraising call. “We have officially made zoning and permitting reform cool! I just want everyone to take that in.”

“What we’re seeing is a generational shift,” Sen. Brian Schatz (D., Hawaii) also said on the call. “If we want to actually solve the problem of the housing shortage, the simplest way is to make it permissible to build.” 

The problem is that federal officials can talk all they want about zoning and permitting reform and easing the permitting process, but that zoning and permitting happens at the local level. As Jerusalem Demsas explains in The Atlantic, California started trying to make it easier to build accessory dwelling units (ADUs) – think mother-in-law suites – back in 1982, but only recently, and after additional legislation, has there been much progress. “Cities are openly flouting state law to prohibit home building,” says Matthew Lewis, communications director at California YIMBY.

Edward L. Glaeser, a Harvard economics professor, offers a potential solution in a New York Times op-ed: threaten to cut off federal funding if states don’t move “to reduce the ability of communities to zone out change,” much as the federal government forced states to raise their drinking age in 1984 or face loss of highway funds. Moral persuasion doesn’t seem to be working.

Doing nothing is not an option. As David Dworkin, president and CEO of the nonprofit National Housing Conference, told Adele Peters of Fast Company:

West Coast cities have struggled with housing affordability, but now we’re seeing these kinds of problems in Boise, Idaho; Little Rock, Arkansas and Charlotte, North Carolina. And that’s really a game changer. The bottom line is if you don’t want affordable housing in your backyard, you’re going to end up with homeless people in your front yard. And you don’t have to go far today to see what that looks like.

Look, we’re still building homes like it was 1924, not 2024. Where are our armies of robots building them in a day or two? Why hasn’t 3D printing of houses – which a 100-unit development in Texas has shown to be feasible – taken off to make building faster and cheaper? With the current commercial real estate glut, converting those buildings to residential is a win/win. We can do better.

A recent editorial in The Lancet called housing “an overlooked social determinant of health,” and concluded: “Making housing a priority public health intervention not only presents a pivotal opportunity, but a moral imperative. The health of our communities depends on it.”

So, yeah: YIMBY.

Monday, September 9, 2024

We Should Learn to Have More Fun (or Vice Versa)

For several years now, my North Star for thinking about innovation has been Steven Johnson’s great quote (in his delightful Wonderland: How Play Made the Modern World): “You will find the future where people are having the most fun.” No, no, no, naysayers argue, inventing the future is serious business, and certainly fun is not the point of business.  Maybe they’re right, but I’m happier hoping for a future guided by a sense of fun than by one guided by P&Ls.

Playing games - and having fun - is important business. Credit: Bing Image Creator

Well, I think I may have found an equally insightful point of view about fun, espoused by game designer Raph Koster in his 2004 book A Theory of Fun for Game Design: “Fun is just another word for learning.”

Wow.

That’s not how most of us think about learning. Learning is hard, learning is going to school, learning is taking tests, learning is something you have to do when you’re not having fun. So “fun is just another word for learning” is quite a different perspective – and one I’m very much attracted to.

I regret that it took me twenty years to discover Mr. Koster’s insight. I read it in a more recent book: Kelly Clancy’s Playing With Reality: How Games Have Shaped Our World. Dr. Clancy is not a game designer; she is a neuroscientist and physicist, but she is all about play. Her book looks at games and game theory, especially how the latter has been misunderstood/misused.

We usually think of play as a waste of time, as something inherently unserious and unimportant, when, in fact, it is how our brains have evolved to learn. The problem is, we’ve turned learning into education, education into a requirement, teaching into a profession, and fun into something entirely separate. We’ve gotten it backwards.

“Play is a tool the brain uses to generate data on which to train itself, a way of building better models of the world to make better predictions,” she writes. “Games are more than an invention; they are an instinct.”  Indeed, she asserts: “Play is to intelligence as mutation is to evolution.”

Mr. Koster’s fuller quote about fun and learning is on target with this:

That’s what games are, in the end. Teachers. Fun is just another word for learning. Games teach you how aspects of reality work, how to understand yourself, how to understand the actions of others, and how to imagine.

We don’t look at our teachers as a source of fun (and many students barely look at them as a source of learning). We don’t look at schools as a place for games, except on the playground, and then only for the youngest students. We drive students to boredom, and, as Mr. Koster says, “boredom is the opposite of learning” (although, ironically, boredom may be important to creativity).  

Learning is actually fun, especially from a physiological standpoint. “Interestingly, learning itself is rewarding to the brain,” Dr. Clancy points out. “Researchers have found that the ‘Aha!’ moment of insight in solving a puzzle triggers dopamine release in the same way sugar or money can.” We love learning; our brains are hardwired to reward us when we figure something new out. Play is a crucial way we get there; as Dr. Clancy writes: “Play is all about the unknown and learning how to navigate it.”

Dr. Clancy is not the first to articulate this point of view. Almost 90 years ago Dutch historian Johan Huizinga wrote Homo Ludens: A Study of the Play-Element in Culture. Dr. Clancy summarizes his point: “Play, historian Johan Huizinga argues in his classic book Homo Ludens, is how humans innovate, from new tools to new social contracts….Huizinga sees games as foundational cultural technology: Civilization arises and unfolds in and as play.”

I am wowed by the assertion that play is how humans innovate. If that seems extreme to you, contrast the crazy, reckless, boisterous atmosphere of many start-ups with the atmosphere of most corporate innovation departments. Not much playing – not much fun! – going on in the latter, I suspect.

Dr. Clancy goes one very interesting step further: “Play has served as a crucible of culture and innovation; it’s at the heart of design itself…Design is what happens when we uncover rules latent in the world and use these to define the logic of a new, separate system.”

That’s not how most of us typically think about design, but how I hope more of us will.   

And if you want to bring up the trend towards the gamification of everything, don’t get Dr. Clancy started: “Gamification, in other words, replaces what people actually want with what corporations want,” and “Many jobs that can easily be gamified will more profitably be automated.” You need more than gamifying to make play.

All this focus on the importance of play and having fun reminds me of the classic essay A Mathematician’s Lament, by Paul Lockhart. In it, he argues that when people say they are just bad at math, what they really are saying is that they’ve been taught math badly. “Math is not about following directions,” he wrote. “It’s about making new directions.” I.e., playing. 

Imagine, he suggests, if music were taught by simply teaching students how to transcribe notes, or art by having students identify colors. The students never get to hear music or to see art, much less to create either on their own. They’d hate both and claim to be bad at them. That, he charges, is what has happened with teaching math. We’ve drained all the fun out of it, taken all the discovery from it.

“What a sad endless cycle of innocent teachers inflicting damage upon innocent students,” Professor Lockhart laments in closing. “We could all be having so much more fun.”

We should.

We’re living in very serious times. If it’s not climate change, it’s microplastics. If it’s not the threat of nuclear war, it’s of biochemical attacks. If it’s not the danger of cyberattacks, it’s of AI. If it’s not the impact of social media, it’s the breakdown of civility. Pick your poison; honestly, it’s hard to keep up with the things we should be worrying about. Fun seems pretty far down our priority list.

Fun is just another word for learning?  Play is at the heart of design? Play is how humans innovate? These are radical concepts in our troubled times, but ones that we should take more seriously -- or, perhaps, more mischievously.

Monday, September 2, 2024

Biohybrid Bots Are Mushrooming

I hadn’t expected to write about a biology-related topic anytime soon after doing so last week, but, gosh darn it, then I saw a press release from Cornell about biohybrid robots – powered by mushrooms (aka fungi)! They had me at “biohybrid.”  

A mushroom powered robot. Credit: Cornell University

The release talks about a new paper -- Sensorimotor Control of Robots Mediated by Electrophysiological Measurements of Fungal Mycelia -- from Cornell’s Organic Robotics Lab, led by Professor Rob Shepherd. As the release describes the work:

By harnessing mycelia’s innate electrical signals, the researchers discovered a new way of controlling “biohybrid” robots that can potentially react to their environment better than their purely synthetic counterparts.

Or, in the researchers’ own words:

The paper highlights two key innovations: first, a vibration- and electromagnetic interference–shielded mycelium electrical interface that allows for stable, long-term electrophysiological bioelectric recordings during untethered, mobile operation; second, a control architecture for robots inspired by neural central pattern generators, incorporating rhythmic patterns of positive and negative spikes from the living mycelia.

Let’s simplify that: “This paper is the first of many that will use the fungal kingdom to provide environmental sensing and command signals to robots to improve their levels of autonomy,” Professor Shepherd said. “By growing mycelium into the electronics of a robot, we were able to allow the biohybrid machine to sense and respond to the environment.”

Lead author Anand Mishra, a research associate in the lab, explained: “If you think about a synthetic system – let’s say, any passive sensor – we just use it for one purpose. But living systems respond to touch, they respond to light, they respond to heat, they respond to even some unknowns, like signals. That’s why we think, OK, if you wanted to build future robots, how can they work in an unexpected environment? We can leverage these living systems, and any unknown input comes in, the robot will respond to that.”

The team built two robots: a soft one shaped like a spider, and a wheeled one. The researchers first used the mycelia’s natural spiking signals to make the robots walk and roll, respectively. Then they exposed the robots to ultraviolet light, which caused the mycelia to react and changed the robots’ gaits. Finally, the researchers were able to override the mycelia signals entirely.

“This kind of project is not just about controlling a robot,” Dr. Mishra said. “It is also about creating a true connection with the living system. Because once you hear the signal, you also understand what’s going on. Maybe that signal is coming from some kind of stresses. So you’re seeing the physical response, because those signals we can’t visualize, but the robot is making a visualization.”

Dr. Shepherd believes that instead of using light as the signal, they will use chemical signals. For example: “The potential for future robots could be to sense soil chemistry in row crops and decide when to add more fertilizer, for example, perhaps mitigating downstream effects of agriculture like harmful algal blooms.”

It turns out that biohybrid robots in general and fungal computing in particular are a thing. In last week’s article I quoted Professor Andrew Adamatzky, of the University of the West of England, about his preference for fungal computing. He is not only the Professor in Unconventional Computing there and the founder and Editor-in-Chief of the International Journal of Unconventional Computing, but he also literally wrote the book on fungal computing. He’s been working on fungal computing since 2018 (and before that on slime mold computing).

Professor Adamatzky notes that fungi have a wide array of sensory inputs: “They sense light, chemicals, gases, gravity, and electric fields,” which opens the door to a wide variety of inputs (and outputs). Accordingly, Ugnius Bajarunas, a member of Professor Adamatzky’s team, told an audience last year: “Our goal is real-time dialog between natural and artificial systems.”

With fungal computing, TechHQ predicts: “The future of computing could turn out to be one where we care for our devices in a way that’s closer to looking after a houseplant than it is to plugging in and switching on a laptop.”

But how would we reboot them?

There are some who feel that we’re making progress on biohybrid robotics faster than we’re thinking about the ethics of it. A paper earlier this summer -- Ethics and responsibility in biohybrid robotics research -- urged that we quickly develop an ethical framework, and potentially regulation.

The authors state: “While the ethical dilemmas associated with biohybrid robotics resonate with challenges seen in fields like biomedicine, conventional robotics, or artificial intelligence, the unique amalgamation of living and nonliving components in biohybrid robots, also called biorobots, breeds its own set of ethical complexities that warrant a tailored investigation.”

Co-lead author Dr. Rafael Mestre, from the University of Southampton, said: "But unlike purely mechanical or digital technologies, bio-hybrid robots blend biological and synthetic components in unprecedented ways. This presents unique possible benefits but also potential dangers."  His co-lead author Aníbal M. Astobiza, an ethicist from the University of the Basque Country, elaborated:

Bio-hybrid robots create unique ethical dilemmas. The living tissue used in their fabrication, potential for sentience, distinct environmental impact, unusual moral status, and capacity for biological evolution or adaptation create unique ethical dilemmas that extend beyond those of wholly artificial or biological technologies.

Dr. Matt Ryan, a political scientist from the University of Southampton and a co-author on the paper, added: “Compared to related technologies such as embryonic stem cells or artificial intelligence, bio-hybrid robotics has developed relatively unattended by the media, the public and policymakers, but it is no less significant.”

Big Think recently focused on the topic, asking: Revolutionary biohybrid robots are coming. Are we prepared? The article points out: “Now, scientific advances have increasingly shown that biological beings aren’t just born; they can be built.” It notes: “Biohybrid robots take advantage of living systems’ millions of years of evolution to grant robots benefits such as self-healing, greater adaptability, and superior sensor resolution. But are we ready for a brave new world where blending the artificial and the biological blurs the line between life and non-life?”

Probably not. As Dr. Mestre and his colleagues concluded: “If debates around embryonic stem cells, human cloning, or artificial intelligence have taught us something, it is that humans rarely agree on the correct resolution of the moral dilemmas of emergent technologies.”

Biohybrid robotics and fungal computing are emerging fast.

Think you know what robots are? You don’t. Think you understand how computing works? Maybe silicon-based, but probably not “unconventional.” Think you’re ready for artificial intelligence? Fungi-powered AI might still surprise you.  

Exciting times indeed.