Tuesday, January 29, 2019

We Don't Need No Stinking Batteries

Quick: how many different power cords and chargers do you have for your various devices?

For your PC, smartphone, tablet, e-reader, or smartwatch, say.  And how much time do you spend actually charging them, or looking for somewhere to charge them?  The answers are likely well into the more-than-I'd-like range. 

All that may be changing, due to something called rectennas.  More importantly, they may be what truly make the Internet-of-Things (IoT) possible.
Rectenna image.  Credit: Christine Daniloff, MIT
Healthcare is quite enamored with the possibilities of IoT -- not just wearables but smart pills, ingestibles, even tiny robots swimming around our bodies fixing problems.  We'd be able to track and address in real time, or near-real time, what is happening with our bodies.  It is truly exciting.

For example, these elastic robots can actually change their shape based on their surroundings:

The researchers who developed them tested them specifically to mimic what it would be like to navigate through blood vessels with varying circumference and viscosity.  The lead author, Selman Sakar, noted: "if they encounter a change in viscosity or osmotic concentration, they modify their shape to maintain their speed and maneuverability without losing control of the direction of motion."

Pretty impressive.

The problem with many IoT devices, though, is similar to the one with our other devices: what happens when the battery runs low?  It's not easy to get a charger into our gut to repower smart pills, and for anyone worried about the ecological risks posed by computer or smartphone batteries, well, imagine tiny versions of those toxic batteries floating around in your body.

We'll need sensors to track the damage done by our other sensors' dead batteries.

Not so with rectennas.  Rectennas are powered by Wi-Fi signals, like the kind you use in your house or at Starbucks to get internet access.  The concept is not new, but what is new is that MIT researchers have been able to harvest enough power to make rectennas useful, in a device only a few atoms thick.

The research -- with the catchy title Two-dimensional MoS2-enabled flexible rectenna for Wi-Fi-band wireless energy harvesting -- appeared in Nature.  MIT Professor and paper co-author Tomás Palacios summed up its importance as follows:
What if we could develop electronic systems that we wrap around a bridge or cover an entire highway, or the walls of our office and bring electronic intelligence to everything around us? How do you provide energy for those electronics? We have come up with a new way to power the electronics systems of the future — by harvesting Wi-Fi energy in a way that’s easily integrated in large areas — to bring intelligence to every object around us.
Rectennas catch AC electromagnetic waves, including Wi-Fi or Bluetooth signals, and feed them to a "two-dimensional" semiconductor that converts them into DC.  Previous rectennas had either been too rigid to be cost-effective or, if flexible, couldn't catch and convert enough power to be useful.  These new rectennas are cheaper, faster, and more flexible than their predecessors.

Credit: ExtremeTech
They can convert wireless signals of up to 10 GHz with about 30% efficiency, producing around 40 microwatts of power.  That 40 microwatts is plenty to power small devices, including wearables or certain medical devices.
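The arithmetic here is simple but worth making explicit.  A rough sketch, with one assumption flagged: the ~150 microwatt incident-power figure below is a typical Wi-Fi power level cited in coverage of the MIT work, not a number from this post.

```python
# Rough energy-harvesting arithmetic for a Wi-Fi rectenna.
# Assumption (not from this post): ~150 microwatts of incident
# Wi-Fi power reaching the rectenna, a typical figure cited in
# coverage of the MIT work.
incident_uW = 150.0   # assumed incident RF power, in microwatts
efficiency = 0.30     # ~30% RF-to-DC conversion efficiency

dc_out_uW = incident_uW * efficiency
print(f"Harvested DC power: {dc_out_uW:.0f} microwatts")
# ~45 microwatts, in line with the ~40 microwatts reported

# Is that enough for a low-power sensor that averages 10 microwatts?
sensor_budget_uW = 10.0
print("Enough for the sensor:", dc_out_uW >= sensor_budget_uW)
```

The point of the back-of-the-envelope: even at modest efficiency, ambient Wi-Fi yields enough for the microwatt-class electronics a smart pill or simple wearable sensor needs.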

The accompanying MIT press release specifically mentioned implantable medical devices and smart pills as promising early applications.  Another study coauthor, Jesús Grajal, a researcher at the Technical University of Madrid, pointed out a key advantage of rectennas: 
Ideally you don’t want to use batteries to power these systems, because if they leak lithium, the patient could die.  It is much better to harvest energy from the environment to power up these small labs inside the body and communicate data to external computers. 
Yes, you'd have to say that would be much better.

For any engineering nerds out there, first author Xu Zhang described their work as follows: “By engineering MoS2 into a 2-D semiconducting-metallic phase junction, we built an atomically thin, ultrafast Schottky diode that simultaneously minimizes the series resistance and parasitic capacitance.”

I think we know who came up with that study title.

Credit: Kong et al., Advanced Materials Technologies
The trick now is finding the right applications.  For example, another set of MIT researchers has developed an "ingestible electronic pill" that releases medications only when necessary and includes sensors that can help monitor a patient's condition, alerting physicians and/or varying the dosage as warranted. 

The electronic pill currently uses a small silver oxide battery, but R&D Magazine noted that the researchers are "exploring using alternative power sources."  Perhaps they should walk down the hall and find out more about rectennas.

It is not clear how quickly these advances can be commercialized, but it can't be soon enough.  The healthcare IoT market is predicted to reach $323b by 2025, with a CAGR of 20.6% from 2017 to 2025.   Maybe a lot of that could use wireless charging, but that'd still require a lot of batteries and a lot of time charging.
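Those two forecast figures are internally consistent, as a quick back-of-the-envelope check shows (treating the $323b as the endpoint of eight years of 20.6% compound growth):

```python
# Sanity check of the healthcare IoT forecast: $323b in 2025 at a
# 20.6% CAGR over 2017-2025 implies a 2017 market of roughly $72b.
end_value_b = 323.0   # forecast 2025 market, in $ billions
cagr = 0.206          # compound annual growth rate
years = 2025 - 2017   # 8 years of compounding

implied_2017_b = end_value_b / (1 + cagr) ** years
print(f"Implied 2017 market: ${implied_2017_b:.0f}b")  # roughly $72b
```

An implied 2017 base of about $72 billion is in the ballpark of published healthcare IoT market estimates, so the forecast at least hangs together arithmetically.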

Of course, power is only one of the major issues IoT will have to figure out; security is another.  Many current IoT devices are not very secure, nor are many current medical devices.  A coalition of healthcare organizations and medical device manufacturers is trying to catch up, recently issuing a Joint Security Plan of actions that need to be taken.  The actions are, unfortunately, only first steps, and are voluntary.

The micro-robots may have to wait -- hopefully, not for too long.

Our mental model of mobile devices has always had to include, and be limited by, their batteries -- how big they need to be, how much power they can produce, and how long they can last.  We've done some impressive things within their constraints, but it will be very interesting to see what kinds of new devices and new uses we can come up with when we don't have to be constrained by them.

What can you imagine if you don't need batteries to power it?






Tuesday, January 22, 2019

Do Unto Robots As You...

It was very clever of The New York Times to feature two diametrically different perspectives on robots on the same day: Do You Take This Robot... and Why Do We Hurt Robots?   They help illustrate that, as robots become more like humans in their abilities and even appearance, we're capable of treating them just as well, and as badly, as we do each other. 

As fans of HBO's Westworld or Channel 4's Humans know, it's possible, perhaps even likely, that at some point the robots are going to realize this -- and they may be pissed about it. 

We're going to have robots in our healthcare system (Global Market Insights forecasts assistive healthcare robots could be a $1.2b market by 2024), in our workplaces, and in our homes.  Some of them will be unobtrusive, some we'll interact with frequently, and some we'll become close to.  How to treat them is something we're going to have to figure out.
Credit: IEEE Spectrum
Written by Alex Williams, Do You Take This Robot... focuses on people actually falling in love with (or at least preferring to be involved with) robots. Sex toys, even sex robots, have been around, but this takes things to a new level.  The term for it is "digisexual."

As Professor Neil McArthur, who studies such things, explained to Discover last year: 
We use the term ‘digisexuals’ to describe people who, mostly as a result of these more intense and immersive new technologies, come to prefer sexual experiences that use them, who don’t necessarily feel the need to involve a human partner, and who define their sexual identity in terms of their use of these technologies. 
Credit: Dilbert/Scott Adams
Apparently we're already in digisexuality's second wave, in which people take advantage of those immersive technologies -- VR/AR or AI-enabled robots -- to form deeper relationships.  Professor Markie Twist, who co-wrote The Rise of Digital Sexuality with Professor McArthur in 2017, told Mr. Williams she has several patients in her clinical practice who qualify as digisexuals. 

Writer Emily Witt told Mr. Williams, "Digital sexuality allows for possibilities of anonymity, gender-bending, fetish play and other modes of experimentation with a degree of safety and autonomy that’s not present in the physical world," and Dr. Twist added: 
Research already shows that people can achieve orgasm with inanimate objects, and we already see how people have a longing for their tech devices, and feel separation anxiety when they are not around.  I think it’s easily possible that people might develop actual love for their technology. They already come up with affectionate names for their cars and boats.
And it's not just about sex.  There are a number of companion robots available or in the pipeline, such as:

  • Ubtech's Walker.  The company describes it as: "Walker is your agile smart companion—an intelligent, bipedal humanoid robot that aims to one day be an indispensable part of your family."  
  • Washington State University's more prosaically named Robot Activity Support System (RAS), aimed at helping people age in place.  
  • Toyota's T-HR3, part of Toyota's drive to put a robot in every home, which sounds like Bill Gates's 1980s vision for PCs.   One Toyota advisor said: "The idea is for the robot to be a friend."
  • Intuition Robot's "social robot" ElliQ.  The company's testing summed up users' reaction: "It’s clearly not just a device, but it’s clearly not a person.  They said it’s a new entity, a new creature, a presence, or a companion...They fully bought into ElliQ’s persona."
  • A number of cute robot pets, such as Zoetic's Kiki or Sony's Aibo.  


All that sounds very helpful, so why, as Jonah Engel Bromwich describes in Why Do We Hurt Robots?, do we have situations like: 
A hitchhiking robot was beheaded in Philadelphia. A security robot was punched to the ground in Silicon Valley. Another security bot, in San Francisco, was covered in a tarp and smeared with barbecue sauce...In a mall in Osaka, Japan, three boys beat a humanoid robot with all their strength. In Moscow, a man attacked a teaching robot named Alantim with a baseball bat, kicking it to the ground, while the robot pleaded for help.
One might understand a factory worker taking an opportunity to damage the robot which took his job, but what do the kids below have against the robot?

Cognitive psychologist Agnieszka Wykowska told Mr. Bromwich that we hurt robots in much the same way we hurt each other.  She noted: "So you probably very easily engage in this psychological mechanism of social ostracism because it’s an out-group member. That’s something to discuss: the dehumanization of robots even though they’re not humans."

As Mr. Bromwich concluded: "Paradoxically, our tendency to dehumanize robots comes from the instinct to anthropomorphize them."  In a previous article I discussed how easy it was to get people to treat robots like persons, and quoted researcher Nicole Kramer: "We are preprogrammed to react socially.  We have not yet learned to distinguish between human social cues and artificial entities who present social cues."

Get ready for it.  Sextech expert Bryony Cole told Mr. Williams: 
In the future, the term ‘digisexual’ will not be relevant.  Subsequent generations will have never known a distinction between their online and offline lives. They may grow up with sex education chatbots, make love to the universe in their own V.R.-created world, or meet their significant other through a hologram. This will be as normal as the sex education we had in schools using VHS tapes.
And you were worried about Fortnite.

Robots have already gotten married, been granted citizenship, and may be granted civil rights sooner than we expect.   If corporations can be "people," we had better expect that robots will be as well.

We seem to think of robots as necessarily obeying Asimov's Three Laws of Robotics, designed to ensure that robots could cause no harm to humans, but we often forget that even in the Asimov universe in which the laws applied, humans weren't always "safe" from robots.  More importantly, that was a fictional universe.  

In our universe, though, self-driving cars can kill people, factory robots can spray people with bear repellent, and robots can learn to defend themselves.  So if we think we can treat robots however we like, we may find ourselves on the other end of that same treatment.  

Increasingly, our health is going to depend on how well robots (and other AI) treat us.  It would be nice (not to mention in our best interests) if we could treat them at least considerately in return. 

Tuesday, January 15, 2019

On to the Next Big Thing

It's amusing to watch old movies where plot points often involved someone's inability to talk to the person they needed, in the pre-mobile phone era.  We take our smartphone's omnipresence and virtual omnipotence as a given in our daily lives, and treat even its temporary loss as a major inconvenience.

So why are people already wondering if the smartphone era is almost over?

Speculation on this is not new (voice has been touted as the next big platform for years), but it intensified after Apple announced reduced revenue expectations earlier this year -- the first time it had done so in 16 years. It specifically cited slower iPhone sales in China and, even more jarring, said it would no longer break out unit sales of iPhones.

Its guidance may have more to do with China's slowing economy, Chinese competitors, or U.S. tariffs on smartphone imports than with anything about the smartphone era, but, as John D. Stoll pointed out in The Wall Street Journal, the iPhone has now been around for almost 12 years, and Apple is overdue for its next big product (the iPad was introduced in 2010). 

Mr. Stoll quotes McKinsey's Nick Santhanam: “Over time, every franchise dies.  You can innovate on an amazing mousetrap, but if people eventually don’t want a mousetrap, you’re screwed.”

Similarly, and also in the Journal, Timothy W. Martin and Sarah Krouse warn:
Today, it looks like the era of smartphone supremacy is starting to wane. The devices aren’t going away any time soon, but their grip on the consumer is weakening. A global sales slump and a lack of hit new advancements has underlined a painful reality for the matured industry: smartphones don’t look so singularly smart anymore.
They point to other "smart" options, including wearables, voice assistants, and connected vehicles.  Wayne Lam of IHS Markit told them: "We may even need another word for whatever the smartphone will become because when ‘smart’ is everywhere that term becomes almost meaningless."  

Jaede Tan, a director at App Annie, told Mr. Stoll: 
What’s not going to go away: the need to have a device that’s constantly with you, to remote control your life. At the moment, we call that the smartphone.  Does it become smaller, sit on your wrist, a chip in the back of your mouth? Maybe. The concept needs to remain constant.
I'll come back to that "chip in the back of your mouth."

There's growing consensus that the future is going to entail the Internet of Things (IoT), in which most everything will be connected and much of that will also be "smart."  It goes to what AI expert Kai-Fu Lee calls "OMO" -- online merges with offline.  He says: "As a next step, offline and online data can be combined...OMO and AI will take us into a future where any distinction between these worlds disappears."


Many believe that this future will be controlled by our voice assistants.  As MIT Technology Review put it: "Everything you own in the future will be controlled by your voice. That’s what this year’s CES, the world’s largest annual gadget bonanza, has made abundantly clear."

TechSpot agreed:  "One of the clearest developments that came out of 2018, and prominently on display last week at CES 2019, was the rise of the embedded voice assistant. Amazon’s Alexa and Google’s Assistant were omnipresent at the show...," although they also noted the problem of multiple voice assistants.  

I'm not convinced about voice assistants as the next big platform.  Yes, they will be more pervasive.  But look at it this way: if the voice advocates are right, then in 2025 we're going to be typing less and talking more, only we won't be talking to each other but to our ubiquitous devices.  That assumes we'll make voice assistants good enough to understand what we're actually saying, and smart enough to know what to do in response. 

That's not what I have a problem with.

We all have had the experience of someone talking too loudly near us on their mobile phone.  It's annoying.  It's distracting.  Imagine what it will be like when it is not just people actually making phone calls, but doing anything.  Imagine what it will be like when I'm talking to the car using Alexa while you're sitting next to me talking to Siri on your Apple wearable and the kids are in the backseat playing Fortnite using Google Assistant. 

We can think faster than we can read, we can read faster than we can talk, and we can (usually) talk faster than we can type.  The future of smart devices is certainly not keyboards but it's not talking either. 
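That ordering holds up even with rough numbers.  The words-per-minute figures below are ballpark assumptions I'm supplying for illustration, not figures from this post:

```python
# Rough throughput of different channels, in words per minute.
# These are ballpark assumptions (not from the post), but the
# ordering -- reading > talking > typing -- is robust to them.
rates_wpm = {
    "silent reading": 250,
    "speaking": 150,
    "typing": 40,
}

# Rank the channels from fastest to slowest.
ranked = sorted(rates_wpm, key=rates_wpm.get, reverse=True)
print(ranked)
```

Even generous typing speeds don't close the gap, which is why voice beats keyboards; the argument here is that brain-computer interfaces could beat both.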

We're going to control our surroundings, or at least the connected devices in it, with our brains, using a brain-computer interface.  It sounds like science fiction, or promises from the Singularity Hub, but it is starting to become real. 

Participant T9 using the BCI: (a) performing a video search; (b) searching for artists in a music streaming program.  Credit: Nuyujukian et al., PLOS ONE
The BrainGate consortium, for example, has been working on this, with some successes.  In their most recent BrainGate 2 clinical trial, several paralyzed participants used a brain-computer interface (BCI) to control an off-the-shelf tablet. 

Study co-author Leigh Hochberg told IEEE Spectrum: "We wanted to see if we could allow somebody to control not an augmentative or alternative communication device, but the same ubiquitous device that people without physical disabilities use every day." 

They could.  Their paper reported:
one user noted, “[T]he tablet became second nature to me, very intuitive. It felt more natural than the times I remember using a mouse.” Another said, “[A]mazing! I have more control over this than what I normally use.” The third added that he “loved” sending text messages via the tablet.
This is cool stuff.  This is exciting stuff.  And it is the future, not just for those with impairments but for most of us.  

Healthcare took 10-15 years longer than other industries to really computerize, and is taking 5-10 years longer to get on the smartphone bandwagon.  It should quickly figure out what roles voice assistants can play -- EHR input, anyone? -- but maybe it doesn't have to wait for the next truly big bandwagon to pass it by before jumping on.

Tuesday, January 8, 2019

In God We Trust, All Others (Don't) Pay Cash

I was intrigued by a recent Wall Street Journal article about how some retail establishments won't accept cash as a form of payment, citing Drybar, Sweetgreen, and at least one Starbucks location.  Cashless is touted as faster, safer, easier to administer, and in line with most customers' preference.

Indeed, a new study from the Pew Research Center found that 29% of all U.S. adults don't use cash at all in their typical week, up from 24% in 2015.  The higher the household income, the less cash was used.
Credit: The Policy Times
Alistair Johnson writes in Forbes that, hey, if we're going to a cashless society, we should make it a cardless one as well, not simply replace our cash with those pieces of plastic we use for debit/credit.  I think he's on to something there, and both discussions made me think about how we change the constructs of our everyday lives -- including in healthcare.

First, let's get this out of the way: we're not going cashless anytime soon.  The WSJ reported that 30% of all transactions, and 55% of those under $10, still use cash.   There is more cash than ever -- about $1.7 trillion -- in circulation (the vast majority of it in $100 bills).

Still, some countries -- notably, Sweden and China -- are racing towards a cashless future.  Cash in Sweden is less than 1% of GDP, and its central bank is seriously considering only issuing "e-krona."  As its central bank governor told The New York Times,
When you are where we are, it would be wrong to sit back with our arms crossed, doing nothing, and then just take note of the fact that cash has disappeared.  You can’t turn back time, but you do have to find a way to deal with change.
In China, consumers are skipping the credit/debit phase and using their mobile devices.  Ninety-two percent of people in its major cities already use WeChat Pay or AliPay as their primary payment method.

Not everyone is sanguine about this possibility.  In the U.K., a new "Access to Cash" study warned that 17% of its population (eight million people) considered cash an economic necessity.  The poor, people without access to fast internet connectivity, and people with physical/mental impairments were cited as being at high risk.

The report's great quote: "if we sleepwalk our way to a cashless society, we'll leave millions behind."

There have been similar concerns raised in Sweden, such as for the poor and the elderly, with the president of the National Pensioners Organization cautioning: "We aren’t against the digital movement, but we think it’s going a bit too fast."

Many critics point to the increased loss of anonymity in a digital currency world, as well as the increased potential impact of cyberattacks, but it's not clear whether we've already crossed those thresholds.

Swedish microchip.  Credit: ZeroHedge
Mr. Johnson's central argument for a cashless/cardless future is that we should be moving to "people's funded decentralized digital identities," using smartphones or smartwatches/fitness trackers.

He doesn't mention it, but Sweden already has thousands of people who have had microchips implanted that serve the same purpose. Jowan Osterlund, the founder of the leading microchip company there, told NPR:
Having different cards and tokens verifying your identity to a bunch of different systems just doesn't make sense.  Using a chip means that the hyper-connected surroundings that you live in every day can be streamlined.
If you've ever juggled your driver's license, insurance card(s), and credit cards for your health care provider, or struggled to fill out (again) your medical history, allergies, and medication list, you might appreciate his point.

We used to carry coins with actual economic value, such as gold or silver, and over time those coins became only symbolic.  When we started using paper money, the promise was that it was at least backed by reserves of such actual things, but it has been a long time since most economies even pretended that was true for any of their currency.   When we started using credit cards, the premise was that merchants could trust the credit card companies about your ability to pay.

Money is a construct, and, at this point, most of it is simply notational.  It exists mostly virtually.

Our healthcare system is a lot like that.  Our various identification cards, payment cards, and even biometric measurements are based on an outdated model, one in which most information is assumed not to be available in real time.   Our insurance system is built on information asymmetries and cash flow barriers.

What might a healthcare system based on your "funded digital identity" look like?
Credit: Wearable Technologies
When you have an encounter with the healthcare system -- virtual, in an office, in a facility -- your chip/phone/watch/etc. is registered and:
  • it knows who you are;
  • it knows your health history, including any readings from biometric data you track;
  • it has detail from your previous (applicable) encounters, even from other clinicians;
  • it knows your various payment sources, including insurance/credit card/bank information;
  • it "negotiates" in real time the price of the service you're going to have, as well as how that will be funded and under what terms (perhaps using smart contracts);
  • it triggers immediate payment upon satisfaction of the promised service;
  • it schedules any follow-up visits or treatments.
No paper is generated.  No cash is exchanged.  No surprises about the bill.  No phone calls to any customer service.  Less reliance on insurance and more on a broader concept of funding.  
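A toy sketch of that encounter flow in code.  Every class, field, and number here is hypothetical, invented purely to illustrate the steps above; a real system would involve health-data standards, payment rails, and actual smart-contract platforms.

```python
# Toy sketch of a "funded digital identity" encounter flow.
# All names and figures are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class DigitalIdentity:
    name: str
    history: list = field(default_factory=list)          # prior encounters
    payment_sources: list = field(default_factory=list)  # insurer, bank, etc.
    balance: float = 0.0                                 # available funding

def encounter(patient: DigitalIdentity, service: str, list_price: float) -> float:
    # Steps 1-3: identity, history, and prior encounters ride along
    # with the patient's digital identity -- nothing to re-enter.
    # Step 4: "negotiate" a price in real time (here, a flat toy discount
    # standing in for a smart contract).
    agreed_price = round(list_price * 0.9, 2)
    # Step 5: trigger immediate payment on completion of the service.
    patient.balance -= agreed_price
    # Step 6: record the visit (follow-up scheduling would hang off this).
    patient.history.append({"service": service, "paid": agreed_price})
    return agreed_price

pt = DigitalIdentity("Jane Doe", payment_sources=["insurer", "bank"], balance=500.0)
paid = encounter(pt, "annual physical", list_price=120.0)
print(paid, pt.balance)  # 108.0 392.0
```

The sketch makes the point concrete: once identity, history, and funding travel together, the billing paperwork that dominates today's encounters simply has nothing left to do.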

It sounds impossible, or at least like something from a far-off future.  Then again, through the 1950s the idea of using credit cards for most purchases was something most people would never have considered, and here we are talking about a cashless society. 

It can be done, and it could be done by 2030.  Not 2130, not 2080, but 2030.  We just have to change our construct about how the healthcare system works. 

Or we can keep paying the way we do now. 

Tuesday, January 1, 2019

2019 Forecast: Amara's Law

I have two predictions for 2019.

One is that at the end of 2019 our healthcare system will still look a lot like it looks now.  Oh, sure, we'll see some cool new technologies, some innovative start-ups, some surprising corporate pairings, some moves by Big Tech, and some promising clinical findings.   But our healthcare system moves slowly, and many in it have strongly vested interests in the status quo. 

The second prediction is that, more than ever, Amara's Law still prevails. 

In case you don't know this "law," it is attributed to Roy Amara, who was President of the Institute for the Future, among other things, and goes like this:
We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.  
There are several technologies, ones that impact healthcare but are not exclusive to it, for which I believe we're still in that short run.  I'm going to talk about three in particular: data, gene editing, and artificial intelligence.   

Data:

We're finally coming to realize that our personal data is not just at risk from hacking, but from the very institutions we've been entrusting it to, such as Facebook or Google.  Our data is being analyzed, bought and sold, manipulated, and used to target us in ways we still don't fully understand.  Legislators and regulators are waking up to this, as in Europe (GDPR) or California (the Consumer Privacy Act). 

At the same time, there are signs that the long-awaited healthcare interoperability may be closer on the horizon than ever.  Apple's Health Records initiative and ONC's Trusted Exchange Framework and Common Agreement are encouraging examples. 

Here's the thing we're still not getting: we're never going back to any Golden Age of privacy, if there ever was one.  Data about you is going to be collected, and shared, by an ever-increasing number of devices in an ever-increasing number of ways from an ever-increasing number of entities. 

E.g., it's not just your phone, it's every device your phone interacts with.  It's not just your phone, it's your wearable, your implants, your car, the street cameras, and so on.  Your EHR will talk not just to other EHRs but to, well, just about everything.  Future versions of us will laugh at the notion that interoperability was ever an issue, or that we'd have control over our data. 

We are not ready for a world in which there will be so much data about us, so widely available, and used in ways that haven't even been invented yet. 
Credit: Medical Research Council
Gene Editing

Much furor resulted from the recent claim that a Chinese researcher had genetically edited babies.  It's not ethical!  We haven't set rules yet!  But no one can really claim to have been surprised: it was only a matter of time before someone, somewhere, did something like this. 

We've gone from needing huge research efforts to sequence the human genome to being able to sequence our own for a few hundred dollars.  CRISPR is making gene editing dangerously easy.  Current gene therapies remain very pricey -- and will remain so if pharma has its way -- but they will become much more widespread. 

One of two things will happen, both with profound implications.  One is that gene editing remains very expensive -- perhaps out of the reach of most insurance.  Wealthy people would then disproportionately benefit, widening the current health inequities between the rich and everyone else.  Only this time the advantages would become genetically baked in.

Or we'll have these therapies widely available, as happened with sequencing.  Anybody and everyone could tailor their genes and the genes of their children, not just "fixing" defects but altering them to suit personal preferences. 

Genes wouldn't even have to remain entirely "human."  Current genetic differences may come to seem trivial by comparison.

We are not ready for a world where some, or even most, can tailor their genetic make-up, in ways we haven't thought of.

Artificial Intelligence

A.I. has been promised literally for decades, and has consistently proved harder to achieve than expected.  As 2019 starts, though, few doubt that A.I. is going to play increasingly important roles in our lives.

In healthcare, A.I. leaders like IBM Watson or Babylon Health may have suffered some recent embarrassing setbacks, but there seems to be a growing, if sometimes grudging, consensus that A.I. is going to be used, especially to augment human doctors.  A.I. will, in this scenario, be able to analyze the data, sort through all the research, interpret the diagnostics, and assist doctors in diagnosing and pinpointing treatments.  Not using it would be like not using MRIs or prescription drugs.

We don't get it.  A.I. isn't just going to augment physicians and other clinicians.  It isn't exactly going to replace them either, doing what humans do but faster and more accurately.  It's going to do new things.

For example, Sandip Panesar recently wrote of the approaching surgical singularity.   He reports on how AI-controlled surgical robots are already operating on animals, and foresees when this moves to human patients.  They will have skills no human could hope for, and be able to do surgeries no human would dare. 

Google's DeepMind AlphaZero has taught itself complex games like chess and Go, and plays in ways that aren't like a human, but are new and surprising.  As Cornell professor Steven Strogatz wrote,  "It was humankind’s first glimpse of an awesome new kind of intelligence." 

Moreover, it is a kind of intelligence that we may not be able to understand.  In healthcare, it will find health problems and design treatments that we never could have and that it may not be able to "explain."   

A.I. won't be physically constrained, won't have a limited number of patients, and will be omnipresent.  More significantly, traditional healthcare approaches like medical licensing or maintenance of certification aren't going to be applicable. 

We are not ready for a healthcare system where our "doctors" are, for the most part, not humans -- nor think like them. 
Credit: Freedom and Society
We tend to think about the future as being very much like the present but with better tools, when, in fact, those tools change us and our institutions in ways we hadn't expected.  So it will be with these technologies. 

As exciting as 2019 may be for healthcare, it will be the calm before the storm.