Are You Ready for Workplace Brain Scanning? – IEEE Spectrum

November 23, 2022
Extracting and using brain data will make workers happier and more productive, backers say
Get ready: Neurotechnology is coming to the workplace. Neural sensors are now reliable and affordable enough to support commercial pilot projects that extract productivity-enhancing data from workers’ brains. These projects aren’t confined to specialized workplaces; they’re also happening in offices, factories, farms, and airports. The companies and people behind these neurotech devices are certain that they will improve our lives. But there are serious questions about whether work should be organized around certain functions of the brain, rather than the person as a whole.
To be clear, the kind of neurotech that’s currently available is nowhere close to reading minds. Sensors detect electrical activity across different areas of the brain, and the patterns in that activity can be broadly correlated with different feelings or physiological responses, such as stress, focus, or a reaction to external stimuli. These data can be exploited to make workers more efficient—and, proponents of the technology say, to make them happier. Two of the most interesting innovators in this field are the Israel-based startup InnerEye, which aims to give workers superhuman abilities, and Emotiv, a Silicon Valley neurotech company that’s bringing a brain-tracking wearable to office workers, including those working remotely.
The fundamental technology that these companies rely on is not new: Electroencephalography (EEG) has been around for about a century, and it’s commonly used today in both medicine and neuroscience research. For those applications, the subject may have up to 256 electrodes attached to their scalp with conductive gel to record electrical signals from neurons in different parts of the brain. More electrodes, or “channels,” mean that doctors and scientists can get better spatial resolution in their readouts—they can better tell which neurons are associated with which electrical signals.

What is new is that EEG has recently broken out of clinics and labs and has entered the consumer marketplace. This move has been driven by a new class of “dry” electrodes that can operate without conductive gel, a substantial reduction in the number of electrodes necessary to collect useful data, and advances in artificial intelligence that make it far easier to interpret the data. Some EEG headsets are even available directly to consumers for a few hundred dollars.
While the public may not have gotten the memo, experts say the neurotechnology is mature and ready for commercial applications. “This is not sci-fi,” says James Giordano, chief of neuroethics studies at Georgetown University Medical Center. “This is quite real.”
Video: InnerEye Security Screening Demo (youtu.be)
In an office in Herzliya, Israel, Sergey Vaisman sits in front of a computer. He’s relaxed but focused, silent and unmoving, and not at all distracted by the seven-channel EEG headset he’s wearing. On the computer screen, images rapidly appear and disappear, one after another. At a rate of three images per second, it’s just possible to tell that they come from an airport X-ray scanner. It’s essentially impossible to see anything beyond fleeting impressions of ghostly bags and their contents.
“Our brain is an amazing machine,” Vaisman tells us as the stream of images ends. The screen now shows an album of selected X-ray images that were just flagged by Vaisman’s brain, most of which are now revealed to have hidden firearms. No one can knowingly identify and flag firearms among the jumbled contents of bags when three images are flitting by every second, but Vaisman’s brain has no problem doing so behind the scenes, with no action required on his part. The brain processes visual imagery very quickly. According to Vaisman, the decision-making process to determine whether there’s a gun in complex images like these takes just 300 milliseconds.
Brain data can be exploited to make workers more efficient—and, proponents of the technology say, to make them happier.
What takes much more time are the cognitive and motor processes that occur after the decision making—planning a response (such as saying something or pushing a button) and then executing that response. If you can skip these planning and execution phases and instead use EEG to directly access the output of the brain’s visual processing and decision-making systems, you can perform image-recognition tasks far faster. The user no longer has to actively think: For an expert, just that fleeting first impression is enough for their brain to make an accurate determination of what’s in the image.

InnerEye’s image-classification system operates at high speed by providing a shortcut to the brain of an expert human. As an expert focuses on a continuous stream of images (from three to 10 images per second, depending on complexity), a commercial EEG system combined with InnerEye’s software can distinguish the characteristic response the expert’s brain produces when it recognizes a target. In this example, the target is a weapon in an X-ray image of a suitcase, representing an airport-security application. Chris Philpot
Vaisman is the vice president of R&D of InnerEye, an Israel-based startup that recently came out of stealth mode. InnerEye uses deep learning to classify EEG signals into responses that indicate “targets” and “nontargets.” Targets can be anything that a trained human brain can recognize. In addition to developing security screening, InnerEye has worked with doctors to detect tumors in medical images, with farmers to identify diseased plants, and with manufacturing experts to spot product defects. For simple cases, InnerEye has found that our brains can handle image recognition at rates of up to 10 images per second. And, Vaisman says, the company’s system produces results just as accurate as a human would when recognizing and tagging images manually—InnerEye is merely using EEG as a shortcut to that person’s brain to drastically speed up the process.
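The underlying decoding task is a classic one: telling single-trial “target” brain responses from “nontarget” ones. As a rough illustration only (InnerEye’s actual pipeline is proprietary deep learning; the sampling rate, filter band, and analysis window here are assumptions), a minimal version of that kind of classifier might look like this:

```python
# Illustrative only: a simple linear classifier on single-trial EEG epochs,
# standing in for the kind of target/nontarget decoding described above.
# InnerEye's system uses proprietary deep learning; the sampling rate, filter
# band, and analysis window here are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

FS = 256             # assumed sampling rate, in Hz
N_CHANNELS = 7       # the article describes a seven-channel headset
WINDOW = (0.2, 0.6)  # seconds after image onset, covering the decision-related response

def bandpass(epochs, lo=1.0, hi=12.0, fs=FS):
    """Zero-phase band-pass filter each channel of each epoch."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, epochs, axis=-1)

def features(epochs):
    """Crop the post-stimulus window and average into coarse time bins."""
    start, stop = int(WINDOW[0] * FS), int(WINDOW[1] * FS)
    cropped = bandpass(epochs)[:, :, start:stop]
    cropped = cropped[:, :, : cropped.shape[-1] // 8 * 8]  # trim so bins divide evenly
    binned = cropped.reshape(len(epochs), N_CHANNELS, -1, 8).mean(axis=-1)
    return binned.reshape(len(epochs), -1)

def evaluate(epochs, labels):
    """epochs: (n_trials, n_channels, n_samples) EEG time-locked to image onset;
    labels: 1 if the flashed image contained a target, else 0."""
    clf = LogisticRegression(max_iter=1000, class_weight="balanced")
    return cross_val_score(clf, features(epochs), labels, cv=5).mean()
```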
While using the InnerEye technology doesn’t require active decision making, it does require training and focus. Users must be experts at the task, well trained in identifying a given type of target, whether that’s firearms or tumors. They must also pay close attention to what they’re seeing—they can’t just zone out and let images flash past. InnerEye’s system measures focus very accurately, and if the user blinks or stops concentrating momentarily, the system detects it and shows the missed images again.
Examine the sample target and nontarget images below, and then try to spot the targets. In the first sequence, 10 images are displayed every second for five seconds on a loop; there are three targets. In the second, showing X-rayed pieces of luggage, three images are displayed every second for five seconds on a loop; there is one weapon. InnerEye
Having a human brain in the loop is especially important for classifying data that may be open to interpretation. For example, a well-trained image classifier may be able to determine with reasonable accuracy whether an X-ray image of a suitcase shows a gun, but if you want to determine whether that X-ray image shows something else that’s vaguely suspicious, you need human experience. People are capable of detecting something unusual even if they don’t know quite what it is.
“We can see that uncertainty in the brain waves,” says InnerEye founder and chief technology officer Amir Geva. “We know when they aren’t sure.” Humans have a unique ability to recognize and contextualize novelty, a substantial advantage that InnerEye’s system has over AI image classifiers. InnerEye then feeds that nuance back into its AI models. “When a human isn’t sure, we can teach AI systems to be not sure, which is better training than teaching the AI system just one or zero,” says Geva. “There is a need to combine human expertise with AI.” InnerEye’s system enables this combination, as every image can be classified by both computer vision and a human brain.
Using InnerEye’s system is a positive experience for its users, the company claims. “When we start working with new users, the first experience is a bit overwhelming,” Vaisman says. “But in one or two sessions, people get used to it, and they start to like it.” Geva says some users do find it challenging to maintain constant focus throughout a session, which lasts up to 20 minutes, but once they get used to working at three images per second, even two images per second feels “too slow.”
In a security-screening application, three images per second is roughly an order of magnitude faster than an expert can work manually. InnerEye says its system allows far fewer humans to handle far more data, with just two human experts redundantly overseeing 15 security scanners at once, supported by an AI image-recognition system that is being trained at the same time, using the output from the humans’ brains.
InnerEye is currently partnering with a handful of airports around the world on pilot projects. And it’s not the only company working to bring neurotech into the workplace.
Emotiv’s MN8 earbuds collect two channels of EEG brain data. The earbuds can also be used for phone calls and music. Emotiv
When it comes to neural monitoring for productivity and well-being in the workplace, the San Francisco–based company Emotiv is leading the charge. Since its founding 11 years ago, Emotiv has released three models of lightweight brain-scanning headsets. Until now the company had mainly sold its hardware to neuroscientists, with a sideline business aimed at developers of brain-controlled apps or games. Emotiv started advertising its technology as an enterprise solution only this year, when it released its fourth model, the MN8 system, which tucks brain-scanning sensors into a pair of discreet Bluetooth earbuds.
Tan Le, Emotiv’s CEO and cofounder, sees neurotech as the next trend in wearables, a way for people to get objective “brain metrics” of mental states, enabling them to track and understand their cognitive and mental well-being. “I think it’s reasonable to imagine that five years from now this [brain tracking] will be quite ubiquitous,” she says. When a company uses the MN8 system, workers get insight into their individual levels of focus and stress, and managers get aggregated and anonymous data about their teams.
The Emotiv Experience. Chris Philpot
Emotiv’s MN8 system uses earbuds to capture two channels of EEG data, from which the company’s proprietary algorithms derive performance metrics for attention and cognitive stress. It’s very difficult to draw conclusions from raw EEG signals [top], especially with only two channels of data. The MN8 system relies on machine-learning models that Emotiv developed using a decade’s worth of data from its earlier headsets, which have more electrodes.
To determine a worker’s level of attention and cognitive stress, the MN8 system uses a variety of analyses. One shown here [middle, bar graphs] reveals increased activity in the low-frequency ranges (theta and alpha) when a worker’s attention is high and cognitive stress is low; when the worker has low attention and high stress, there’s more activity in the higher-frequency ranges (beta and gamma). This analysis and many others feed into the models that present simplified metrics of attention and cognitive stress [bottom] to the worker.
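For a concrete sense of what such spectral features look like, here is a minimal sketch assuming a generic two-channel recording and the canonical band definitions. Emotiv’s real metrics come from proprietary machine-learning models, so the sampling rate, band edges, and the simple ratio below are illustrative assumptions, not the company’s method:

```python
# Illustrative only: canonical EEG band powers from two channels, the kind of
# spectral features described above. The sampling rate, band edges, and ratio
# are assumptions; Emotiv's actual metrics come from proprietary ML models.
import numpy as np
from scipy.signal import welch

FS = 128  # assumed earbud sampling rate, in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (12, 30), "gamma": (30, 45)}

def band_powers(eeg_window):
    """eeg_window: (2, n_samples) array from the two earbud channels."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=FS * 2, axis=-1)
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = psd[:, mask].mean(axis=-1).mean()  # average over frequencies and channels
    return powers

def simple_attention_index(eeg_window):
    """Toy index: relatively more low-frequency (theta/alpha) power than
    high-frequency (beta/gamma) power is treated as higher attention and lower
    cognitive stress, mirroring the pattern described in the article."""
    p = band_powers(eeg_window)
    return (p["theta"] + p["alpha"]) / (p["beta"] + p["gamma"] + 1e-12)
```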
Emotiv launched its enterprise technology into a world that is fiercely debating the future of the workplace. Workers are feuding with their employers about return-to-office plans following the pandemic, and companies are increasingly using “bossware” to keep tabs on employees—whether staffers or gig workers, working in the office or remotely. Le says Emotiv is aware of these trends and is carefully considering which companies to work with as it debuts its new gear. “The dystopian potential of this technology is not lost on us,” she says. “So we are very cognizant of choosing partners that want to introduce this technology in a responsible way—they have to have a genuine desire to help and empower employees.”
Lee Daniels, a consultant who works for the global real estate services company JLL, has spoken with a lot of C-suite executives lately. “They’re worried,” says Daniels. “There aren’t as many people coming back to the office as originally anticipated—the hybrid model is here to stay, and it’s highly complex.” Executives come to Daniels asking how to manage a hybrid workforce. “This is where the neuroscience comes in,” he says.
Emotiv has partnered with JLL, which has begun to use the MN8 earbuds to help its clients collect “true scientific data,” Daniels says, about workers’ attention, distraction, and stress, and how those factors influence both productivity and well-being. Daniels says JLL is currently helping its clients run short-term experiments using the MN8 system to track workers’ responses to new collaboration tools and various work settings; for example, employers could compare the productivity of in-office and remote workers.
“The dystopian potential of this technology is not lost on us.” —Tan Le, Emotiv CEO
Emotiv CTO Geoff Mackellar believes the new MN8 system will succeed because of its convenient and comfortable form factor: The multipurpose earbuds also let the user listen to music and answer phone calls. The downside of earbuds is that they provide only two channels of brain data. When the company first considered this project, Mackellar says, his engineering team looked at the rich data set they’d collected from Emotiv’s other headsets over the past decade. The company boasts that academics have conducted more than 4,000 studies using Emotiv tech. From that trove of data—from headsets with 5, 14, or 32 channels—Emotiv isolated the data from the two channels the earbuds could pick up. “Obviously, there’s less information in the two sensors, but we were able to extract quite a lot of things that were very relevant,” Mackellar says.
Once the Emotiv engineers had a hardware prototype, they had volunteers wear the earbuds and a 14-channel headset at the same time. By recording data from the two systems in unison, the engineers trained a machine-learning algorithm to identify the signatures of attention and cognitive stress from the relatively sparse MN8 data. The brain signals associated with attention and stress have been well studied, Mackellar says, and are relatively easy to track. Although everyday activities such as talking and moving around also register on EEG, the Emotiv software filters out those artifacts.
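A minimal sketch of that paired-recording idea follows, with all names and model choices assumed rather than taken from Emotiv: features computed from the two earbud channels are fit against attention scores derived from the 14-channel headset worn at the same time.

```python
# Minimal sketch of the paired-recording training described above. Everything
# here (feature set, model, attention labels) is an illustrative assumption;
# Emotiv's actual signal processing and models are proprietary.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

def train_sparse_channel_model(earbud_features, headset_attention):
    """
    earbud_features: (n_windows, n_features) features from the 2-channel earbuds
    headset_attention: (n_windows,) attention scores derived from the 14-channel
                       headset recorded simultaneously, used as supervision
    """
    X_train, X_test, y_train, y_test = train_test_split(
        earbud_features, headset_attention, test_size=0.2, random_state=0)
    model = GradientBoostingRegressor()
    model.fit(X_train, y_train)
    print("held-out R^2:", model.score(X_test, y_test))
    return model
```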
The app that’s paired with the MN8 earbuds doesn’t display raw EEG data. Instead, it processes that data and shows workers two simple metrics relating to their individual performance. One squiggly line shows the rise and fall of workers’ attention to their tasks—the degree of focus and the dips that come when they switch tasks or get distracted—while another line represents their cognitive stress. Although short periods of stress can be motivating, too much for too long can erode productivity and well-being. The MN8 system will therefore sometimes suggest that the worker take a break. Workers can run their own experiments to see what kind of break activity best restores their mood and focus—maybe taking a walk, or getting a cup of coffee, or chatting with a colleague.
While MN8 users can easily access data from their own brains, employers don’t see individual workers’ brain data. Instead, they receive aggregated data to get a sense of a team or department’s attention and stress levels. With that data, companies can see, for example, on which days and at which times of day their workers are most productive, or how a big announcement affects the overall level of worker stress.
Emotiv emphasizes the importance of anonymizing the data to protect individual privacy and prevent people from being promoted or fired based on their brain metrics. “The data belongs to you,” says Emotiv’s Le. “You have to explicitly allow a copy of it to be shared anonymously with your employer.” If a group is too small for real anonymity, Le says, the system will not share that data with employers. She also predicts that the device will be used only if workers opt in, perhaps as part of an employee wellness program that offers discounts on medical insurance in return for using the MN8 system regularly.
However, workers may still be worried that employers will somehow use the data against them. Karen Rommelfanger, founder of the Institute of Neuroethics, shares that concern. “I think there is significant interest from employers” in using such technologies, she says. “I don’t know if there’s significant interest from employees.”
Both she and Georgetown’s Giordano doubt that such tools will become commonplace anytime soon. “I think there will be pushback” from employees on issues such as privacy and worker rights, says Giordano. Even if the technology providers and the companies that deploy the technology take a responsible approach, he expects questions to be raised about who owns the brain data and how it’s used. “Perceived threats must be addressed early and explicitly,” he says.
Giordano says he expects workers in the United States and other western countries to object to routine brain scanning. In China, he says, workers have reportedly been more receptive to experiments with such technologies. He also believes that brain-monitoring devices will really take off first in industrial settings, where a momentary lack of attention can lead to accidents that injure workers and hurt a company’s bottom line. “It will probably work very well under some rubric of occupational safety,” Giordano says. It’s easy to imagine such devices being used by companies involved in trucking, construction, warehouse operations, and the like. Indeed, at least one such product, an EEG headband that measures fatigue, is already on the market for truck drivers and miners.
Giordano says that using brain-tracking devices for safety and wellness programs could be a slippery slope in any workplace setting. Even if a company focuses initially on workers’ well-being, it may soon find other uses for the metrics of productivity and performance that devices like the MN8 provide. “Metrics are meaningless unless those metrics are standardized, and then they very quickly become comparative,” he says.
Rommelfanger adds that no one can foresee how workplace neurotech will play out. “I think most companies creating neurotechnology aren’t prepared for the society that they’re creating,” she says. “They don’t know the possibilities yet.”
This article appears in the December 2022 print issue.
Evan Ackerman is a senior editor at IEEE Spectrum. Since 2007, he has written over 6,000 articles on robotics and technology. He has a degree in Martian geology and is excellent at playing bagpipes.
Eliza Strickland is a senior editor at IEEE Spectrum, where she covers AI, biomedical engineering, and other topics. She holds a master’s degree in journalism from Columbia University.
The ban spotlights semiconductors for supercomputers; China hasn’t yet responded to restrictions
It has now been over a month since the U.S. Commerce Department issued new rules that clamped down on the export of certain advanced chips—which have military or AI applications—to Chinese customers.
China has yet to respond—but Beijing has multiple options in its arsenal. It’s unlikely, experts say, that the U.S. actions will be the last fighting word in an industry that is becoming more geopolitically sensitive by the day.
This is not the first time that the U.S. government has constrained the flow of chips to its perceived adversaries. Previously, the United States has blocked chip sales to individual Chinese customers. In response to the Russian invasion of Ukraine earlier this year, the United States (along with several other countries, including South Korea and Taiwan) placed Russia under a chip embargo.
But none of these prior U.S. chip bans were as broad as the new rules, issued on 7 October. “This announcement is perhaps the most expansive export control in decades,” says Sujai Shivakumar, an analyst at the Center for Strategic and International Studies, in Washington.
The rules prohibit the sale, to Chinese customers, of advanced chips with both high performance (at least 300 trillion operations per second, or 300 teraops) and fast interconnect speed (generally, at least 600 gigabytes per second). Nvidia’s A100, for comparison, is capable of over 600 teraops and matches the 600 GB/s interconnect speed. Nvidia’s more impressive H100 can reach nearly 4,000 trillion operations per second and 900 GB/s. Neither chip, both intended for data centers and AI training, can be sold to Chinese customers under the new rules.
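The two headline thresholds combine as a logical AND. The sketch below is only a paraphrase of the figures cited above, not the legal test spelled out in the rules themselves:

```python
# Rough paraphrase of the two headline thresholds above (at least 300 teraops
# of performance AND roughly 600 GB/s of interconnect bandwidth). The actual
# export rules are far more detailed; this is only an illustration.
def likely_restricted(teraops: float, interconnect_gb_per_s: float) -> bool:
    return teraops >= 300 and interconnect_gb_per_s >= 600

print(likely_restricted(600, 600))   # A100-class figures cited above -> True
print(likely_restricted(3900, 900))  # H100-class figures cited above -> True
```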

Additionally, the rules restrict the sale of fabrication equipment if it will knowingly be used to make certain classes of advanced logic or memory chips. This includes logic chips produced at nodes of 16 nanometers or less (which the likes of Intel, Samsung, and TSMC have done since the early 2010s); NAND long-term memory integrated circuits with at least 128 layers (the state of the art today); or DRAM short-term memory integrated circuits produced at 18 nanometers or less (which Samsung began making in 2016).
Chinese chipmakers have barely scratched the surface of those numbers. SMIC switched on 14-nm mass production this year, despite facing existing U.S. sanctions. YMTC started shipping 128-layer NAND chips last year.
The rules restrict not just U.S. companies, but citizens and permanent residents as well. U.S. employees at Chinese semiconductor firms have had to pack up. ASML, a Dutch maker of fabrication equipment, has told U.S. employees to stop servicing Chinese customers.

Speaking of Chinese customers, most—including offices, gamers, designers of smaller chips—probably won’t feel the controls. “Most chip trade and chip production in China is unimpacted,” says Christopher Miller, a historian who studies the semiconductor trade at Tufts University.
The controlled sorts of chips instead go into supercomputers and large data centers, and they’re desirable for training and running large machine-learning models. Most of all, the United States hopes to stop Beijing from using chips to enhance its military—and potentially preempt an invasion of Taiwan, where the vast majority of the world’s semiconductors and microprocessors are produced.
In order to seal off one potential bypass, the controls also apply to non-U.S. firms that rely on U.S.-made equipment or software. For instance, Taiwanese or South Korean chipmakers can’t sell Chinese customers advanced chips that are fabricated with U.S.-made technology.
It’s possible to apply to the U.S. government for an exemption from at least some of the restrictions. Taiwanese fab juggernaut TSMC and South Korean chipmaker SK Hynix, for instance, have already acquired temporary exemptions—for a year. “What happens after that is difficult to say,” says Patrick Schröder, a researcher at Chatham House in London. And the Commerce Department has already stated that such licenses will be the exception, not the rule (although Commerce Department undersecretary Alan Estevez suggested that around two-thirds of licenses get approved).
More export controls may be en route. Estevez indicated that the government is considering placing restrictions on technologies in other sensitive fields—specifically mentioning quantum information science and biotechnology, both of which have seen China-based researchers forge major progress in the past decade.
The Chinese government has so far retorted with harsh words and little action. “We don’t know whether their response will be an immediate reaction or whether they have a longer-term approach to dealing with this,” says Shivakumar. “It’s speculation at this point.”

Beijing could work with foreign companies whose revenue in the lucrative Chinese market is now under threat. “I’m really not aware of a particular company that thinks it’s coming out a winner in this,” says Shivakumar. This week, in the eastern city of Hefei, the Chinese government hosted a chipmakers’ conference whose attendees included U.S. firms AMD, Intel, and Qualcomm.
Nvidia has already responded by introducing a China-specific chip, the A800, which appears to be a modified A100 cut down to meet the requirements. Analysts say that Nvidia’s approach could be a model for other companies to keep up Chinese sales.
There may be other tools the Chinese government can exploit. While China may be dependent on foreign semiconductors, foreign electronics manufacturers are in turn dependent on China for rare-earth metals—and China supplies the supermajority of the world’s rare earths.
There is precedent for China curtailing its rare-earth supply for geopolitical leverage. In 2010, a Chinese fishing boat collided with two Japanese Coast Guard vessels, triggering an international incident when Japanese authorities arrested the boat’s captain. In response, the Chinese government cut off rare-earth exports to Japan for several months.
Certainly, much of the conversation has focused on the U.S. action and the Chinese reaction. But for third parties, the entire dispute delivers constant reminders of just how tense and volatile the chip supply can be. In the European Union, home to less than 10 percent of the world’s microchips market, the debate has bolstered interest in the prospective European Chips Act, a plan to heavily invest in fabrication in Europe. “For Europe in particular, it’s important not to get caught up in this U.S.-China trade issue,” Schröder says.
“The way in which the semiconductor industry has evolved over the past few decades has predicated on a relatively stable geopolitical order,” says Shivakumar. “Obviously, the ground realities have shifted.”
Batteries expose supply-chain and skills gaps
Robert N. Charette is a Contributing Editor to IEEE Spectrum and an acknowledged international authority on information technology and systems risk management. A self-described “risk ecologist,” he is interested in the intersections of business, political, technological, and societal risks. Charette is an award-winning author of multiple books and numerous articles on the subjects of risk management, project and program management, innovation, and entrepreneurship. A Life Senior Member of the IEEE, Charette was a recipient of the IEEE Computer Society’s Golden Core Award in 2008.
A General Motors Hummer EV chassis sits in front of a Hummer EV outside an event where GM CEO Mary Barra announced a US $7 billion investment in EV and battery production in Michigan in January 2022.
“Energy and information are two basic currencies of organic and social systems,” the economics Nobelist Herb Simon once observed. “A new technology that alters the terms on which one or the other of these is available to a system can work on it the most profound changes.”

Electric vehicles at scale alter the terms of both basic currencies concurrently. Reliable, secure supplies of minerals and software are core elements for EVs, which represent a “shift from a fuel-intensive to a material-intensive energy system,” according to a report by the International Energy Agency (IEA). For example, the mineral requirements for an EV’s batteries and electric motors are six times that of an internal-combustion-engine (ICE) vehicle, which can increase the average weight of an EV by 340 kilograms (750 pounds). For something like the Ford Lightning, the weight can be more than twice that amount.
EVs also create a shift from an electromechanical-intensive to an information-intensive vehicle. EVs offer a virtual clean slate from which to accelerate the design of safe, software-defined vehicles, with computing and supporting electronics being the prime enabler of a vehicle’s features, functions, and value. Software also allows for the decoupling of the internal mechanical connections needed in an ICE vehicle, permitting an EV to be controlled remotely or autonomously. An added benefit is that the loss of the ICE power train not only reduces the components a vehicle requires but also frees up space for increased passenger comfort and storage.
The effects of Simon’s profound changes are readily apparent, forcing a 120-year-old industry to fundamentally reinvent itself. EVs require automakers to design new manufacturing processes and build plants to make both EVs and their batteries. Ramping up the battery supply chain is the automakers’ current “most challenging topic,” according to VW chief financial officer Arno Antlitz.
It can take five or more years to get a lithium mine up and going, but operations can start only after it has secured the required permits, a process that itself can take years.
These plants are also very expensive. Ford and its Korean battery supplier SK Innovation are spending US $5.6 billion to produce F-Series EVs and batteries in Stanton, Tenn., for example, while GM is spending $2 billion to produce its new Cadillac Lyriq EVs in Spring Hill, Tenn. As automakers expand their lines of EVs, tens of billions more will need to be invested in both manufacturing and battery plants. It is little wonder that Tesla CEO Elon Musk calls EV factories “gigantic money furnaces.”
Furthermore, adds Kristin Dziczek, a policy analyst with the Federal Reserve Bank of Chicago, there are scores of new global EV competitors actively seeking to replace the legacy automakers. The “simplicity” of EVs in comparison with ICE vehicles allows these disruptors to compete virtually from scratch with legacy automakers, not only in the car market itself but for the material and labor inputs as well.
Another critical question is whether all the planned battery-plant output will support expected EV production demands. For instance, the United States will require 8 million EV batteries annually by 2030 if its target to make EVs half of all new-vehicle sales is met, with that number rising each year after. As IEA executive director Fatih Birol observes, “Today, the data shows a looming mismatch between the world’s strengthened climate ambitions and the availability of critical minerals that are essential to realizing those ambitions.”
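The 8 million figure follows from simple arithmetic, assuming U.S. new-vehicle sales stay near recent levels; the sales number below is an assumption for illustration, not from the article.

```python
# Back-of-the-envelope check of the 8 million figure, assuming U.S. new-vehicle
# sales of roughly 16 million per year (an assumption; recent years have ranged
# from about 15 to 17 million).
annual_us_new_vehicle_sales = 16_000_000
ev_share_target_2030 = 0.5
print(annual_us_new_vehicle_sales * ev_share_target_2030)  # about 8 million EVs, each needing a battery
```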
This mismatch worries automakers. GM, Ford, Tesla, and others have moved to secure batteries through 2025, but it could be tricky after that. Rivian Automotive chief executive RJ Scaringe was recently quoted in the Wall Street Journal as saying that “90 to 95 percent of the (battery) supply chain does not exist,” and that the current semiconductor chip shortage is “a small appetizer to what we are about to feel on battery cells over the next two decades.”
The competition for securing raw materials, along with increased consumer demand, has caused EV prices to spike. Ford has raised the price of the Lightning by $6,000 to $8,500, and CEO Jim Farley bluntly states that in regard to material shortages in the foreseeable future, “I don’t think we should be confident in any other outcomes than an increase in prices.”
One critical area of resource competition is over the limited supply of software and systems engineers with the mechatronics and robotics expertise needed for EVs. Major automakers have moved aggressively to bring more software and systems-engineering expertise on board, rather than have it reside at their suppliers, as they have traditionally done. Automakers feel that if they're not in control of the software, they're not in control of their product.

Volvo CEO Jim Rowan stated earlier this year that increasing the computing power in EVs will be harder, and will change the automotive industry more, than the switch from ICE vehicles to EVs. This means that EV winners and losers will in great part be separated by their “relative strength in their cyberphysical systems engineering,” states Clemson University’s Chris Paredis.
Even for the large auto suppliers, the transition to EVs will not be an easy road. For instance, automakers are demanding these suppliers absorb more cost cuts because automakers are finding EVs so expensive to build. Not only do automakers want to bring cutting-edge software expertise in-house, they want greater inside expertise in critical EV supply-chain components, especially batteries.
Automakers, including Tesla, are all scrambling for battery talent, with bidding wars reportedly breaking out to acquire top candidates. With automakers planning to spend more than $13 billion to build at least 13 new EV battery plants in North America within the next five to seven years, experienced management and production-line talent will likely be in extremely short supply. Tesla’s Texas Gigafactory needs some 10,000 workers alone, for example. With at least 60 new battery plants planned to be in operation globally by 2030, and scores needed soon afterward, major battery makers are already highlighting their expected skill shortages.

The underlying reason for the worry: Supplying sufficient raw materials to existing and planned battery plants as well as to the manufacturers of other renewable energy sources and military systems—who are competing for the same materials—has several complications to overcome. Among them is the need for more mines to provide the metals required, which have spiked in price as demand has increased. For example, while demand for lithium is growing rapidly, investment in mines has significantly lagged the investment that has been aimed toward EVs and battery plants. It can take five or more years to get a lithium mine up and going, but operations can start only after it has secured the required permits, a process that itself can take years.
Mining the raw materials, of course, assumes that there is sufficient refining capability to process them, which, outside of China, is limited. This is especially true in the United States, which, according to a Biden Administration special supply-chain investigative report, has “limited raw material production capacity and virtually no processing capacity.” Consequently, the report states, the United States “exports the limited raw materials produced today to foreign markets.” For example, output from the only nickel mine in the United States, the Eagle Mine in Michigan, is sent to Canada for smelting.
“Energy and information are two basic currencies of organic and social systems. A new technology that alters the terms on which one or the other of these is available to a system can work on it the most profound changes.” —Herb Simon
One possible solution is to move away from lithium-ion and nickel-metal-hydride batteries to other battery chemistries such as lithium-iron phosphate, lithium-sulfur, lithium-metal, and sodium-ion, among many others, not to mention solid-state batteries, as a way to alleviate some of the material supply and cost problems. Tesla is moving toward the use of lithium-iron phosphate batteries, as is Ford for some of its vehicles. These batteries are cobalt free, which alleviates several sourcing issues.
Another solution may be recycling both EV batteries and the waste and rejects from battery manufacturing, which can run between 5 and 10 percent of production. Effective recycling of EV batteries “has the potential to reduce primary demand compared to total demand in 2040, by approximately 25 percent for lithium, 35 percent for cobalt and nickel, and 55 percent for copper,” according to a report by the University of Sydney’s Institute for Sustainable Futures.

While investments in EV battery recycling facilities have started, there is a looming question of whether there will be enough battery factory scrap and other lithium-ion battery waste to keep them operational while they wait for sufficient numbers of spent batteries to make them profitable. Lithium-ion battery-pack recycling is very time-consuming and expensive, often making it cheaper to mine lithium than to recycle it, for example. The low- or no-cobalt lithium batteries that many automakers are moving toward may also be unprofitable to recycle.
An additional concern is that EV batteries, once no longer useful for propelling the EV, have years of life left in them. They can be refurbished, rebuilt, and reused in EVs, or repurposed into storage devices for homes, businesses, or the grid. Whether it will make economic sense to do either at scale versus recycling them remains to be seen.
As Howard Nusbaum, the administrator of the National Salvage Vehicle Reporting Program (NSVRP), succinctly puts it, “There is no recycling, and no EV-recycling industry, if there is no economic basis for one.”
In the next article in the series, we will look at whether the grid can handle tens of millions of EVs.
Intensive clinical collaboration is fueling growth of NYU Tandon’s biomedical engineering program
Dexter Johnson is a contributing editor at IEEE Spectrum, with a focus on nanotechnology.
This optical tomography device can be used to recognize and track breast cancer without the negative effects of previous imaging technologies. It shines near-infrared light into breast tissue and measures the light attenuation caused by propagation through the affected tissue.
This is a sponsored article brought to you by NYU’s Tandon School of Engineering.
When Andreas H. Hielscher, the chair of the biomedical engineering (BME) department at NYU’s Tandon School of Engineering, arrived at his new position, he saw raw potential. NYU Tandon had undergone a meteoric rise in its U.S. News & World Report graduate ranking in recent years, skyrocketing 47 spots since 2009. At the same time, the NYU Grossman School of Medicine had shot from the thirties to the #2 spot in the country for research. The two scientific powerhouses, sitting on opposite banks of the East River, offered Hielscher a unique opportunity: to work at the intersection of engineering and healthcare research, with the unmet clinical needs and clinician feedback from NYU’s world-renowned medical program directly informing new areas of development, exploration, and testing.
“There is now an understanding that technology coming from a biomedical engineering department can play a big role for a top-tier medical school,” said Hielscher. “At some point, everybody needs to have a BME department.”
In the early days of biomedical engineering departments nationwide, there was some resistance even to the notion of biomedical engineering: either you were an electrical engineer or a mechanical engineer. “That’s no longer the case,” said Hielscher. “The combining of the biology and medical aspects with the engineering aspects has been proven to be the best approach.”
Dr. Andreas Hielscher, NYU Tandon Biomedical Engineering Department Chair and head of the Clinical Biophotonics Laboratory, speaks with IEEE Spectrum about his work leveraging optical tomography for early detection and treatment monitoring for breast cancer.
The proof of this can be seen by the trend that an undergraduate biomedical degree has become one of the most desired engineering degrees, according to Hielscher. He also noted that the current Dean of NYU’s Tandon School of Engineering, Jelena Kovačević, has a biomedical engineering background, having just received the 2022 IEEE Engineering in Medicine and Biology Society career achievement award for her pioneering research related to signal processing applications for biomedical imaging.
Mary Cowman, a pioneer in joint and cartilage regeneration, began laying the foundations for NYU Tandon’s biomedical engineering department in the 2010s. Since her retirement in 2020, Hielscher has continued to grow the department through innovative collaborations with the medical school and medical center, including the recently-announced Translational Healthcare Initiative, on which Hielscher worked closely with Daniel Sodickson, the co-director of the medical school’s Tech4Health.
Andreas Hielscher joined NYU Tandon in 2020 as Professor and Chair of the Department of Biomedical Engineering.
NYU Tandon
“The fundamental idea of the Initiative is to have one physician from Langone Medical School, and one engineer at least—you could have multiple—and have them address some unmet clinical needs, some particular problem,” explained Hielscher. “In many cases they have already worked together, or researched this issue. What this initiative is about is to give these groups funding to do some experimentation to either prove that it won’t work, or demonstrate that it can and prioritize it.”

With this funding of further experimentation, it becomes possible to develop the technology to a point where you could begin to bring investors in, according to Hielscher. “This mitigates the risk of the technology and helps attract potential investors,” added Hielscher. “At that point, perhaps a medical device company comes in, or some angel investor, and then you can get to the next level of investment for moving the technology forward.”
Hielscher himself has been leading research on developing new technologies within the Clinical Biophotonics Laboratory. One of the latest areas of research has been investigating the application of optical technologies to breast cancer diagnosis.
Cross sections of a breast with a tumor during a breath hold, taken with a dynamic optical tomographic breast imaging system developed by Dr. Hielscher. As a patient holds their breath, the blood concentration increases by up to 10 percent (seen in red). Dr. Hielscher’s team found that analyzing the increase and decrease in blood concentrations inside a tumor could help them determine which patients would respond to chemotherapy.
A.H. Hielscher, Clinical Biophotonics Laboratory
Hielscher and his colleagues have built a system that shines light through both breasts at the same time. By measuring how much light is reflected back, it’s possible to generate maps of locations with high levels of oxygen and total hemoglobin, which may indicate tumors.
“We look at where there’s blood in the breast,” explained Hielscher. “Because breast tumors recruit new blood vessels, or, once they grow, they generate their own vascular network requiring more oxygen, wherever there is a tumor you will see an increase in total blood volume, and you will see more oxygenated blood.”
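To see how attenuation measurements translate into hemoglobin maps, here is a minimal sketch using the standard two-wavelength modified Beer-Lambert calculation common in near-infrared spectroscopy. Hielscher’s system performs full tomographic image reconstruction, which is far more involved; the extinction coefficients, path length, and differential pathlength factor below are illustrative assumptions.

```python
# Minimal sketch, assuming a standard two-wavelength modified Beer-Lambert
# calculation (common in near-infrared spectroscopy). Hielscher's system does
# full tomographic reconstruction; the coefficients and geometry here are
# illustrative assumptions only.
import numpy as np

# Approximate molar extinction coefficients [1/(mM*cm)] at two NIR wavelengths
# (illustrative values; real work would use tabulated coefficients).
EXTINCTION = np.array([
    # HbO2,  Hb
    [0.59, 1.55],   # ~760 nm
    [1.06, 0.69],   # ~850 nm
])

def hemoglobin_changes(delta_od, pathlength_cm=6.0, dpf=5.0):
    """
    delta_od: change in optical density measured at the two wavelengths, shape (2,)
    Returns (delta_HbO2, delta_Hb) in mM via the modified Beer-Lambert law:
        delta_OD(lambda) = [eps_HbO2*dHbO2 + eps_Hb*dHb] * pathlength * DPF
    """
    effective_path = pathlength_cm * dpf
    return np.linalg.solve(EXTINCTION * effective_path, np.asarray(delta_od))

# Example: increased attenuation at both wavelengths implies more total
# hemoglobin (blood volume) in the probed tissue, the signal described above.
d_hbo2, d_hb = hemoglobin_changes([0.02, 0.03])
print("delta HbO2:", d_hbo2, "mM; delta Hb:", d_hb, "mM; total:", d_hbo2 + d_hb)
```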
Initially, this diagnostic tool was targeted for early detection, since mammograms can only detect calcification in lower density breast tissue of women over a certain age. But it soon became clear in collaboration with clinical partners that it was also highly effective in monitoring treatment.
“Technology coming from a biomedical engineering department can play a big role for a top-tier medical school”
—Andreas H. Hielscher, Biomedical Engineering Department Chair, NYU Tandon
This realization came in part because of a recent change in cancer treatment that has moved towards what is known as neoadjuvant chemotherapy, in which chemotherapy drugs are administered before surgical extraction of the tumor. One of the drawbacks of this approach is that only around 60 percent of patients respond favorably to the chemotherapy, resulting in a large percentage of patients suffering through a grueling six-month-long chemotherapy treatment with minimal-to-no impact on the tumor.
With the optical technique, Hielscher and his colleagues have found that if they can detect a noticeable decrease of blood in targeted areas after two weeks, it’s very likely that the patient will respond to the chemotherapy. On the other hand, if they see that the amount of blood in that area stays the same, then there’s a very high likelihood that the patient will not respond to the therapy.
This same fundamental technique can also be applied to what is known as peripheral artery disease (PAD), which affects many patients with diabetes and involves the narrowing or blockage of the vessels that carry blood from the heart to the legs. An Israel-based company called VOTIS has licensed the technology for diagnosing and treating PAD.
Example of a frequency-domain image of a finger joint (proximal interphalangeal joint of index finger) affected by lupus arthritis.
A.H. Hielscher, Clinical Biophotonics Laboratory
While Hielscher’s work is in biophotonics, he recognized that the department has also quickly been developing a reputation in other emerging areas, including wearables, synthetic biology, and neurorehabilitation and stroke prediction.
Hielscher highlighted the recent work of Rose Faghih, who works on smart wearables and data for mental health; Jef Boeke, a synthetic-biology pioneer; and S. Farokh Atashzar, who works on neurorehabilitation and stroke prediction. Atashzar’s work was highlighted last year in the pages of IEEE Spectrum.
“Rose Faghih is leveraging all kinds of sensors to make inferences about the mental state of patients, to determine if someone is depressed or schizophrenic, and then possibly have a feedback loop where you actually also treat them,” said Hielscher. “Jef Boeke is involved in what I term ‘wet engineering,’ and is currently involved in efforts to take cancer cells outside of the body to find a way to attack them, or reprogram them.”
As NYU Tandon’s BME department goes forward, Hielscher’s aim is that the department becomes a trusted source for the medical school, and that partnership enables key technologies to go from an unmet clinical need or an idea in a lab to a patient’s bedside in a 3-5 year timeframe.
“What I really would like,” Hielscher concluded, “is that if somebody in the medical school has a problem, the first thing they would say is, ‘Oh, I’ll call the engineering school. I bet there’s somebody there that can help me.’ We can work together to benefit patients, and we’re starting this already.”
