CADmore Metal has introduced cold metal fusion (CMF), a fresh take on 3D printing metal components, to the North American market. John Carrington, the company’s CEO, claims CMF produces stronger 3D-printed metal parts that are cheaper and faster to make. That includes titanium components, which have historically caused trouble for 3D printers.
3D printing has used metals including aluminum, powdered steel, and nickel alloys for some time. While titanium parts are in high demand in fields such as aerospace and health care due to their superior strength-to-weight ratio, corrosion resistance, and suitability for complex geometries, the metal has presented challenges for 3D printers.
Titanium becomes more reactive at high temperatures and tends to crack when the printed part cools. It can also become brittle as it absorbs hydrogen, oxygen, or nitrogen during the printing process. Carrington says CMF overcomes these issues.
“Our primary customers tend to come from the energy, defense, and aerospace industries,” says Carrington. “One large defense contractor recently switched from traditional 3D printing to CMF as it will save them millions and reduce prototyping and parts production by months.”
CMF combines the flexibility of 3D printing with new powder metallurgy processes to provide strength and greater durability to parts made from titanium and many other metals and alloys. The process uses a combination of proprietary metal powder and polymer binding agents that are fused layer by layer to create high-strength metal components.
The process begins like any other 3D printing project: A digital file that represents the desired 3D object directs the actions of a standard industrial 3D printer in laying down a mixture of metal and a plastic binder. A laser lightly fuses each layer of powder into a cohesive solid structure. Excess powder is removed for reuse.
Where CMF differs is that the initial parts generated by this stage of the process are strong enough for grinding, drilling, and milling if required. The parts then soak in a solvent to dissolve the plastic binder. Next, they go into a furnace to burn off any remaining binder, fuse the metal particles, and compact them into a dense metal component. Finishing treatments, such as polishing and heat treatment, can then be applied.
“Our cold metal fusion technology offers a process that is at least three times faster and more scalable than any other kind of 3D printing,” says Carrington. “Per-part prices are generally 50 to 60 percent less than alternative metal 3D printing technology. We expect those prices to go down even more as we scale.”
3D printing with metal powders such as titanium makes it possible to create parts with complex geometries. CADmore Metal
The material used in CMF was developed by Headmade Materials, a German company. Headmade holds a patent on this 3D printing feedstock, which has been designed for use by the existing ecosystem of 3D printing machines. CADmore Metal serves as the exclusive North American distributor for the metal powders used in CMF. The company can also serve as a systems integrator for the entire process by providing the printing and sintering hardware, the specialized powders, process expertise, training, and technical support.
“We provide guidance on design optimization and integration with existing workflows to help customers maximize the technology’s benefits,” says Carrington. “If a turbine company comes to us to produce their parts using CMF, we can either build the parts for them as a service or set them up to carry out their own production internally while we supply the powder and support.”
With the global 3D printing market now worth almost US $5 billion and predicted to reach $13 billion by 2035, according to analyst firm IDTechEx, the arrival of CMF is timely. CADmore Metal just opened North America’s first CMF application center, a nearly 280-square-meter (3,000-square-foot) facility in Columbia, S.C. Carrington says that a larger facility will open in 2026 to make room for more material processing and equipment.

Daniela Rus has spent her career breaking barriers—scientific, social, and material—in her quest to build machines that amplify rather than replace human capability. She made robotics her life’s work, she says, because she understood it was a way to expand the possibilities of computing while enhancing human capabilities.
“I like to think of robotics as a way to give people superpowers,” Rus says. “Machines can help us reach farther, think faster, and live fuller lives.”
Employer: MIT
Job title: Professor of electrical and computer engineering and computer science; director of the MIT Computer Science and Artificial Intelligence Laboratory
Member grade: Fellow
Alma maters: University of Iowa, in Iowa City; Cornell
Her dual missions, she says, are to make technology humane and to make the most of the opportunities afforded by life in the United States. The two goals have fueled her journey from a childhood living under a dictatorship in Romania to the forefront of global robotics research.
Rus, who is director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), is the recipient of this year’s IEEE Edison Medal, which recognizes her for “sustained leadership and pioneering contributions in modern robotics.”
An IEEE Fellow, she describes the recognition as a responsibility to further her work and mentor the next generation of roboticists entering the field.
The Edison Medal is the latest in a string of honors she has received. In 2017 she won an Engelberger Robotics Award from the Robotic Industries Association. The following year, she was honored with the Pioneer in Robotics and Automation Award by the IEEE Robotics and Automation Society. The society recognized her again in 2023 with its IEEE Robotics and Automation Technical Field Award.
Rus was born in Cluj-Napoca, Romania, during the rule of dictator Nicolae Ceausescu. Her early life unfolded in a world defined by scarcity—rationed food, intermittent electricity, and a limited ability to move up or out. But she recalls that, amid the stifling insufficiencies, she was surrounded by an irrepressible warmth and intellectual curiosity—even when she was making locomotive screws in a state-run factory as part of her school’s curriculum.
“Life was hard,” she says, “but we had great teachers and strong communities. As a child, you adapt to whatever is around you.”
Her father, Teodor, was a computer scientist and professor, and her mother, Elena, was a physicist.
In 1982, when Rus was 19, her father emigrated to the United States to join the faculty at the University of Iowa, in Iowa City. It was an act of courage and conviction. Within a year, Daniela and her mother joined him there.
“He wanted the freedom to think, to publish, to explore ideas,” Rus says. “And I reaped the benefits of being free from the limitations of our homeland.”
America’s open horizons were intoxicating, she says.
Rus decided to pursue a degree at her father’s university, where her life changed direction, she says. One afternoon, John Hopcroft—a Turing Award–winning Cornell computer scientist renowned for his work on algorithms and data structures—gave a talk on campus. His message was simple but electrifying, Rus says: Classical computer science had been solved. The next frontier, Hopcroft declared, was computations that interact with the messy physical world.
For Rus, the idea was a revelation.
“It was as if a door had opened,” she says. “I realized the future of computing wasn’t just about logic and code; it was about how machines can perceive, move, and help us in the real world.”
After the lecture, she introduced herself to Hopcroft and told him she wanted to learn from him. Not long after earning her bachelor’s degree in computer science and mathematics in 1985, she applied to the master’s program at Cornell, where Hopcroft became her graduate advisor. Rus developed algorithms there for dexterous robotic manipulation—teaching machines to grasp and move objects with precision. She earned her master’s in computer science in 1990, then stayed on at Cornell to work toward a doctorate.
In 1993 she earned her Ph.D. in computer science, then took a position as an assistant professor of computer science at Dartmouth College, in Hanover, N.H. She founded the college’s robotics laboratory and expanded her work into distributed robotics, developing teams of small robots that cooperated to perform tasks such as ensuring warehouse products are correctly gathered to fulfill orders, packaged safely, and routed efficiently to their destinations.
Despite a lack of traditional machine shop facilities for fabrication on the Hanover campus, Rus found a way. She pioneered the use of 3D printing to rapidly prototype and build robots.
In 2003 she left Dartmouth to become a professor in the electrical engineering and computer science department at MIT.
The robotics lab she created at Dartmouth moved with her to MIT and became known as the Distributed Robotics Laboratory (DRL). In 2012 she was named director of MIT’s Computer Science and Artificial Intelligence Laboratory, the school’s largest interdisciplinary lab, with 60 research groups including the DRL. She also continues to serve as the DRL’s principal investigator.
Rus now leads pioneering research at the intersection of AI and robotics, a field she calls physical intelligence. It’s “a new form of intelligent machine that can understand dynamic environments, cope with unpredictability, and make decisions in real time,” she says.
Her lab builds soft-body robots inspired by nature that can sense, adapt, and learn. They are AI-driven systems that passively handle tasks—such as self-balancing and complex articulation similar to that done by the human hand—because their shape and materials minimize the need for heavy processing.
Such machines, she says, someday will be able to navigate different environments, perform useful functions without external control, and even recover from disturbances to their route planning. Researchers also are exploring ways to make them more energy-efficient.
One prototype developed by Rus’s team is designed to retrieve foreign objects from the body, including batteries swallowed by children. The ingestible robots are artfully folded, similar to origami, so they are small enough to be swallowed. Embedded magnetic materials allow doctors to steer the soft robots and control their shape. Upon arriving in the stomach, a soft bot can be programmed to wrap around a foreign object and guide it safely out of the patient’s body.
CSAIL researchers also are working on small robots that can carry a medication and release it at a specific area within the digestive tract, bypassing the stomach acid known to diminish some drugs’ efficacy. Ingestible robots also could patch up internal injuries or ulcers. And because they’re made from digestible materials such as sausage casings and biocompatible polymers, the robots can perform their assigned tasks and then get safely absorbed by the body, she says.
Health care isn’t the only application on the horizon for such AI-driven technologies. Robots with physical intelligence might someday help firefighters locate people trapped in burning buildings, find miners after a cave-in, and provide valuable situational awareness information to emergency response teams in the aftermath of natural disasters, Rus says.
“What excites me is the possibility of giving people new powers,” she says. “Machines that can think and move safely in the physical world will let us extend human reach—at work, at home, in medicine … everywhere.”
To make such a vision a reality, she has expanded her technical interests to include several complementary lines of research.
She’s working on self-reconfiguring and modular robots such as MIT’s M-Blocks and NASA’s SuperBots, which can attach, detach, and rearrange themselves to form shapes suited for different actions such as slithering, climbing, and crawling.
With networked robots—including those Amazon uses in its warehouses—thousands of machines can operate as a large adaptive system. The machines communicate continuously to divide tasks, avoid collisions, and optimize package routing.
Rus’s team also is making advances in human-robot interaction, such as reading brainwave activity and interpreting sign language through a smart glove.
To further her plan of putting all the computerized smarts the robots need within their physical bodies instead of in the cloud, she helped found Liquid AI in 2023. The company, based in Cambridge, Mass., develops liquid neural networks, inspired by the simple brains of worms, that can learn and adapt continuously. The word liquid in this case refers to the adaptability, flexibility, and dynamic nature of the team’s model architecture. It can change shape and adapt to new data inputs, and it fits within constraints imposed by the hardware in which it’s contained, she says.
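For readers curious about what “liquid” means concretely, here is a minimal sketch of a liquid time-constant (LTC) update, following the formulation Hasani, Rus, and colleagues published in 2021. It is an illustrative toy with invented weights and sizes, not Liquid AI’s proprietary architecture; the essential idea is that the input itself modulates each neuron’s effective time constant.

import numpy as np

# One liquid time-constant (LTC) layer, stepped with the fused
# implicit-Euler update from the published LTC formulation. All weights
# and dimensions here are invented for this demo.
rng = np.random.default_rng(0)
n_hidden, n_in = 8, 3
W_in = rng.normal(size=(n_hidden, n_in)) * 0.5
W_rec = rng.normal(size=(n_hidden, n_hidden)) * 0.1
bias = np.zeros(n_hidden)
A = np.ones(n_hidden)        # per-neuron equilibrium target
tau = np.ones(n_hidden)      # base time constants

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ltc_step(x, u, dt=0.05):
    # Input-dependent gate: this is what makes the dynamics "liquid."
    f = sigmoid(W_in @ u + W_rec @ x + bias)
    # x_next = (x + dt * f * A) / (1 + dt * (1/tau + f))
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

x = np.zeros(n_hidden)
for t in range(100):         # drive the cell with a toy input signal
    u = np.array([np.sin(0.1 * t), np.cos(0.1 * t), 1.0])
    x = ltc_step(x, u)
print(x.round(3))            # hidden state after 100 steps

Because the gate f depends on the current input, each neuron’s response speed changes with the data it sees, which is what lets such networks keep adapting after deployment.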
Rus joined IEEE at one of its robotics conferences when she was a graduate student.
“I think I signed up just to get the student discount,” she says with a laugh. “But IEEE turned out to be the place where my community lived.”
She credits the organization’s conferences, journals, and collaborative spirit with shaping her professional growth.
“The exchange of ideas, the chance to test your thinking against others—it’s invaluable,” she says. “It’s how our field moves forward.”
Rus continues to serve on IEEE panels and committees, mentoring the next generation of roboticists.
“IEEE gave me a platform,” Rus says. “It taught me how to communicate, how to lead, and how to dream bigger.”
Looking back, Rus sees her story as a testament to unforeseen possibilities.
“When I was growing up in Romania, I couldn’t even imagine living in America,” she says. “Now I’m here, working with brilliant students, building robots that help people, and trying to make a difference. I feel like I’m living the American dream.”
In a nod to a memorable song from the Broadway musical Hamilton, Rus echoes Alexander Hamilton’s determination to make the most of his opportunities, saying, “I don’t ever want to throw away my shot.”

This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Taro and delivered to your inbox for free!
A word that frequently comes up in career conversations is, unfortunately, “toxic.” The engineers I speak with will tell me that they’re dealing with a toxic manager, a toxic teammate, or a toxic work culture. When you find yourself in a toxic work environment, what should you do?
Is it worth trying to improve things over time, or should you just leave?
The difficult truth is that, in nearly every case, the answer is to leave a toxic team as soon as you can. Here’s why:
The world of technology is large and constantly getting larger. Don’t waste your time on a bad team or with a bad manager. Find another team or company, or start something on your own.
Engineers often hesitate to leave a poor work environment because they’re afraid or unsure about the process of finding something new. That’s a valid concern. However, inertia should not be the reason you stick around in a job. The best careers stem from the excitement of actively choosing your work, not tolerating toxicity.
Finally, it’s worth noting that even in a toxic team, you’ll still come across smart and kind people. If you are stuck on a bad team, seek out the people who match your wavelength. These relationships will enable you to find new opportunities when you inevitably decide to leave!
—Rahul
Are you looking for a new podcast to add to your queue? IEEE Women in Engineering recently launched a podcast featuring experts from around the world who discuss workplace challenges and amplify the diverse experiences of women across STEM fields. New episodes are released on the third Wednesday of each month.
Entrepreneurship is a skill that can benefit all engineers. The editor in chief of IEEE Engineering Management Review shares his tips for acting more like an entrepreneur, from changing your mode of thinking to executing a plan. “The shift from ‘someone should’ to ‘I will’ is the start of entrepreneurial thinking,” the author writes.
In a piece for Communications of the ACM, a former employee of Xerox PARC reflects on the lessons he learned about managing a research lab. The philosophies that underpin innovative labs, the author says, require a different approach than those focused on delivering products or services. See how these unwritten rules can help cultivate breakthroughs.

When the head of Nokia Bell Labs core research talks about “lessons learned” from 5G, he’s doing something rare in telecom: admitting a flagship technology didn’t quite work out as planned.
That candor matters now, too, because Bell Labs core research president Peter Vetter says 6G’s success depends on getting infrastructure right the first time—something 5G didn’t fully do.
By 2030, he says, 5G will have exhausted its capacity. Not because some 5G killer app will appear tomorrow, suddenly making everyone’s phones demand 10 or 100 times as much data capacity as they require today. Rather, by the turn of the decade, wireless telecom won’t be centered around just cellphones anymore.
AI agents, autonomous cars, drones, IoT nodes, and sensors, sensors, sensors: Everything in a 6G world will potentially need a way onto the network. That means that, more than anything else in the remaining years before 6G’s anticipated rollout, high-capacity connections behind cell towers are the key game to win. Which brings industry scrutiny to what telecom folks call backhaul—the high-capacity fiber or wireless links that pass data from cell towers toward the internet backbone. It’s the difference between the “local” connection from your phone to a nearby tower and the “trunk” connection that carries millions of signals simultaneously.
But the backhaul crisis ahead isn’t just about capacity. It’s also about architecture. 5G was designed around a world where phones dominated, downloading video at higher and higher resolutions. 6G is now shaping up to be something else entirely. This inversion—from 5G’s anticipated downlink deluge to 6G’s uplink resurgence—requires rethinking everything at the core level, practically from scratch.
Vetter’s career spans the entire arc of the wireless telecom era—from optical interconnections in the 1990s at Alcatel (a research center pioneering fiber-to-home connections) to his roles at Bell Labs and later Nokia Bell Labs, culminating in 2021 in his current position at the industry’s bellwether institution.
In this conversation, held in November at the Brooklyn 6G Summit in New York, Vetter explains what 5G got wrong, what 6G must do differently, and whether these innovations can arrive before telecom’s networks start running out of room.
IEEE Spectrum: Where is telecom today, halfway between 5G’s rollout and 6G’s anticipated rollout?
Peter Vetter: Today, we have enough spectrum and capacity. But going forward, there will not be enough. The 5G network by the end of the decade will run out of steam. We have traffic simulations. And it is something that has been consistent generation to generation, from 2G to 3G to 4G. Every decade, capacity goes up by about a factor of 10. So you need to prepare for that.
And the challenge for us as researchers is how do you do that in an energy-efficient way? Because the power consumption cannot go up by a factor of 10. The cost cannot go up by a factor of 10. And then, lesson learned from 5G: The idea was, “Oh, we do that in higher spectrum. There is more bandwidth. Let’s go to millimeter wave.” The lesson learned is, okay, millimeter waves have short reach. You need a small cell [tower] every 300 meters or so. And that doesn’t cut it. It was too expensive to install all these small cells.
Is this related to the backhaul question?
Vetter: So backhaul is the connection between the base station and what we call the core of the network—the data centers and the servers. Ideally, you use fiber to your base station. If you have that fiber as a service provider, use it. It gives you the highest capacity. But very often new cell sites don’t have that fiber backhaul, and then there are alternatives: wireless backhaul.
Nokia Bell Labs has pioneered a glass-based chip architecture for telecom’s backhaul signals, communicating between towers and telecom infrastructure. Nokia
What are the challenges ahead for wireless backhaul?
Vetter: To get up to the 100-gigabit-per-second, fiber-like speeds, you need to go to higher frequency bands.
Higher frequency bands for the signals the backhaul antennas use?
Vetter: Yes. The challenge is the design of the radio front ends and the radio-frequency integrated circuits (RFICs) at those frequencies. You cannot really integrate [present-day] antennas with RFICs at those high speeds.
And what happens as those signal frequencies get higher?
Vetter: So in a millimeter wave, say 28 gigahertz, you could still do [the electronics and waveguides] for this with a classical printed circuit board. But as the frequencies go up, the attenuation gets too high.
What happens when you get to, say, 100 GHz?
Vetter: [Conventional materials] are no good anymore. So we need to look at other still low-cost materials. We have done pioneering work at Bell Labs on radio on glass. And we use glass not for its optical transparency, but for its transparency in the subterahertz radio range.
Is Nokia Bell Labs making these radio-on-glass backhaul systems for 100-GHz communications?
Vetter: I used an order of magnitude. Above 100 GHz, you need to look into a different material. But [the frequency range] is actually 140 to 170 GHz, what is called the D-Band.
We collaborate with our internal customers to get these kind of concepts on the long-term road map. As an example, that D-Band radio system, we actually integrated it in a prototype with our mobile business group. And we tested it last year at the Olympics in Paris.
But this is, as I said, a prototype. We need to mature the technology between a research prototype and qualifying it to go into production. The researcher on that is Shahriar Shahramian. He’s well-known in the field for this.
What will be the applications that’ll drive the big 6G demands for bandwidth?
Vetter: We’re installing more and more cameras and other types of sensors. I mean, we’re going into a world where we want to create large world models that are synchronous copies of the physical world. So what we will see going forward in 6G is a massive-scale deployment of sensors which will feed the AI models. So a lot of uplink capacity. That’s where a lot of that increase will come from.
Any others?
Vetter: Autonomous cars could be an example. It can also be in industry—like a digital twin of a harbor, and how you manage that? It can be a digital twin of a warehouse, and you query the digital twin, “Where is my product X?” Then a robot will automatically know thanks to the updated digital twin where it is in the warehouse and which route to take. Because it knows where the obstacles are in real time, thanks to that massive-scale sensing of the physical world and then the interpretation with the AI models.
You will have your agents that act on behalf of you to do your groceries or order a driverless car. They will actively record where you are, make sure that there are also the proper privacy measures in place. So that your agent has an understanding of the state you’re in and can serve you in the most optimal way.
You’ve described before how 6G signals can not only transmit data but also provide sensing. How will that work?
Vetter: The augmentation now is that the network can also be turned into a sensing modality. If you turn around a corner, a camera doesn’t see you anymore, but the radio can still detect people approaching, at a traffic crossing for instance, and you can anticipate that. Yeah, warn a car: “There’s a pedestrian coming. Slow down.” We also have fiber sensing, using fibers at the bottom of the ocean to detect the movement of waves and provide early tsunami warnings, for instance.
What are your teams’ findings?
Vetter: Present-day tsunami warning buoys sit a few hundred kilometers offshore. Tsunami waves travel at 300 meters per second or more, so you have only about 15 minutes to warn people and evacuate. If you have a fiber sensing network across the ocean, you can detect a tsunami much farther out and do meaningful early warning.
We recently detected there was a major earthquake in East Russia. That was last July. And we had a fiber sensing system between Hawaii and California. And we were able to see that earthquake on the fiber. And we also saw the development of the tsunami wave.
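(The timing is simple to verify. In the sketch below, the 270-kilometer buoy distance and the 2,000-kilometer fiber-sensing reach are illustrative assumptions, not figures from Nokia.)

# Warning time: how long a tsunami wave takes to travel from the point
# of detection to shore. Distances are illustrative assumptions.
def warning_minutes(detection_distance_km, wave_speed_m_per_s=300.0):
    return detection_distance_km * 1000 / wave_speed_m_per_s / 60

print(warning_minutes(270))    # buoys a few hundred km out: 15.0 minutes
print(warning_minutes(2000))   # deep-ocean fiber sensing: ~111 minutes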
Bell Labs was an early pioneer in multiple-input, multiple-output (MIMO) antennas starting in the 1990s, in which multiple transmit and receive antennas carry many data streams at once. What is Bell Labs doing with MIMO now to help solve these bandwidth problems you’ve described?
Vetter: So, as I said earlier, you want to provide capacity from existing cell sites. And MIMO can do that through beamforming: If you want better coverage at a higher frequency, you need to focus your electromagnetic energy, your radio energy, even more. So in order to do that, you need a larger number of antennas.
So if you double the frequency, we go from 3.5 GHz, which is the C-band in 5G, now to 6G, 7 GHz. So it’s about double. That means the wavelength is half. So you can fit four times more antenna elements in the same form factor. So physics helps us in that sense.
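(Vetter’s arithmetic is easy to check: with the conventional half-wavelength element spacing, the number of elements that fit in a fixed panel grows as the square of the frequency. In this Python sketch, the 0.2-meter panel width is an assumed figure for illustration.)

# Antenna elements that fit in a square panel, assuming conventional
# half-wavelength element spacing. Doubling the frequency halves the
# wavelength, so about 4x the elements fit in the same form factor.
C = 3e8  # speed of light, m/s

def elements_in_panel(freq_hz, panel_m=0.2):
    spacing_m = (C / freq_hz) / 2         # half-wavelength spacing
    return (panel_m / spacing_m) ** 2     # elements per side, squared

print(elements_in_panel(3.5e9))   # 5G C-band: ~22 elements
print(elements_in_panel(7e9))     # 7 GHz: ~87 elements, 4x as many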
What’s the catch?
Vetter: Where physics doesn’t help us is more antenna elements means more signal processing, and the power consumption goes up. So here is where the research then comes in. Can we creatively get to these larger antenna arrays without the power consumption going up?
The use of AI is important in this. How can we leverage AI to do channel estimation, to do such things as equalization, to do smart beamforming, to learn the waveform, for instance?
We’ve shown that with these kind of AI techniques, we can get actually up to 30 percent more capacity on the same spectrum.
And that allows many gigabits per second to go out to each phone or device?
Vetter: So gigabits per second is already possible in 5G. We’ve demonstrated that. You can imagine that this could go up, but that’s not really the need. The need is really how many more can you support from a base station?

Talking to Robert N. Charette can be pretty depressing. Charette, who has been writing about software failures for this magazine for the past 20 years, is a renowned risk analyst and systems expert who over the course of a 50-year career has seen more than his share of delusional thinking among IT professionals, government officials, and corporate executives, before, during, and after massive software failures.
In 2005’s “Why Software Fails,” in IEEE Spectrum, a seminal article documenting the causes behind large-scale software failures, Charette noted, “The biggest tragedy is that software failure is for the most part predictable and avoidable. Unfortunately, most organizations don’t see preventing failure as an urgent matter, even though that view risks harming the organization and maybe even destroying it. Understanding why this attitude persists is not just an academic exercise; it has tremendous implications for business and society.”
Two decades and several trillion wasted dollars later, he finds that people are making the same mistakes. They claim their project is unique, so past lessons don’t apply. They underestimate complexity. Managers come out of the gate with unrealistic budgets and timelines. Testing is inadequate or skipped entirely. Vendor promises that are too good to be true are taken at face value. Newer development approaches like DevOps or AI copilots are implemented without proper training or the organizational change necessary to make the most of them.
What’s worse, the huge impacts of these missteps on end users aren’t fully accounted for. When the Canadian government’s Phoenix pay system initially failed, for instance, the developers glossed over the protracted financial and emotional distress inflicted on tens of thousands of employees receiving erroneous paychecks; problems persist today, nine years later. Perhaps that’s because, as Charette told me recently, IT project managers don’t have professional licensing requirements and are rarely, if ever, held legally liable for software debacles.
While medical devices may seem a far cry from giant IT projects, they have a few things in common. As Special Projects Editor Stephen Cass uncovered in this month’s The Data, the U.S. Food and Drug Administration recalls on average 20 medical devices per month due to software issues.
“Software is as significant as electricity. We would never put up with electricity going out every other day, but we sure as hell have no problem having AWS go down.” —Robert N. Charette
Like IT projects, medical devices face fundamental challenges posed by software complexity. Which means that testing, though rigorous and regulated in the medical domain, can’t possibly cover every scenario or every line of code. The major difference between failed medical devices and failed IT projects is that a huge amount of liability attaches to the former.
“When you’re building software for medical devices, there are a lot more standards that have to be met and a lot more concern about the consequences of failure,” Charette observes. “Because when those things don’t work, there’s tort law available, which means manufacturers are on the hook. It’s much harder to bring a case and win when you’re talking about an electronic payroll system.”
Whether a software failure is hyperlocal, as when a medical device fails inside your body, or spread across an entire region, like when an airline’s ticketing system crashes, organizations need to dig into the root causes and apply those lessons to the next device or IT project if they hope to stop history from repeating itself.
“Software is as significant as electricity,” Charette says. “We would never put up with electricity going out every other day, but we sure as hell have no problem accepting AWS going down or telcos or banks going out.” He lets out a heavy sigh worthy of A.A. Milne’s Eeyore. “People just kind of shrug their shoulders.”

Innovation, expertise, and efficiency often take center stage in the engineering world. Yet engineering’s impact lies not only in technical advancement but also in its ability to serve the greater good. This foundational principle is behind IEEE’s public imperative initiatives, which apply our efforts and expertise to support our mission of advancing technology for humanity with a direct benefit to society.
Public imperative activities and initiatives serve society by promoting understanding, impact for humans and our environment, and responsible use of science and technology. These initiatives encompass a wide range of efforts, including STEM outreach, humanitarian technology deployments, public education on emerging technologies, and sustainability. Unlike many efforts advancing technology, these initiatives are not designed with financial opportunity in mind. Instead, they fulfill IEEE’s designation as a 501(c)(3) public charity engaged in scientific and educational activities for the benefit of the engineering community and the public.
Across the globe, IEEE members and volunteers dedicate their time and use their talents, experiences, and expertise to lead, organize, and drive activities to advance technology for humanity. The IEEE Social Impact report showcases a selection of recent projects and initiatives that support that mission.
In my March column, I described my vision for One IEEE, which is aimed at empowering IEEE’s diverse units to work together in ways that magnify their individual and collective impact. Within the framework of One IEEE, public imperative activities are not peripheral; they are central to unifying the organization and amplifying our global relevance. Across IEEE’s varied regions, societies, and technical communities, these activities align efforts around a shared mission. They provide our members from different disciplines and geographies the opportunity to collaborate on projects that transcend boundaries, fostering interdisciplinary innovation and global stewardship.
Such activities also offer members opportunities to apply their technical expertise in service of societal needs. Whether finding innovative solutions to connect the unconnected or developing open-source educational tools for students, we are solving real-world problems. The initiatives transform abstract technical knowledge into actionable solutions, reinforcing the idea that technology is not just about building systems—it’s about building futures.
For our young professionals and students, these activities offer hands-on experiences that connect technical skills with real-world applications, inspiring the next generation to pursue careers in engineering with purpose and passion. These activities also create mentorship opportunities, leadership pathways, and a sense of belonging within the wider IEEE community.
In an age when technology influences practically every aspect of life—from health care and energy to communication and transportation—IEEE must, as a leading technical authority, also serve as a socially responsible leader. Public imperative activities include IEEE’s commitment to ethical development, university and pre-university education, and accessible innovation. They help bridge the gap between technical communities and the public, working to ensure that engineering solutions are accessible, equitable, and aligned with societal values.
From a strategic standpoint, public imperatives also support IEEE’s long-term sustainability. The organization is redesigning its budget process to emphasize aligning financial resources with mission-driven goals. One of the guiding principles is to publicize IEEE’s public charity status and invest accordingly.
That means promoting our public imperatives in funding decisions, integrating them into operational planning, and measuring their outcomes with engineering rigor. By treating these activities as core infrastructure, IEEE ensures that its resources are deployed in ways that maximize public benefit and organizational impact.
Public imperatives are vital to the success of One IEEE. They embody the organization’s mission, unify its global membership, and demonstrate the societal relevance of engineering and technology. They offer our members the opportunity to apply their skills in meaningful ways, contribute to public good, and shape the future of technology with integrity.
Through our public imperative activities, IEEE is a force for innovation and a driver of meaningful impact.
This article appears in the December 2025 print issue as “Engineering With Purpose.”

For the past decade, progress in artificial intelligence has been measured by scale: bigger models, larger datasets, and more compute. That approach delivered astonishing breakthroughs in large language models (LLMs); in just five years, AI has leapt from models like GPT-2, which could hardly mimic coherence, to systems like GPT-5 that can reason and engage in substantive dialogue. And now early prototypes of AI agents that can navigate codebases or browse the web point towards an entirely new frontier.
But size alone can only take AI so far. The next leap won’t come from bigger models alone. It will come from combining ever-better data with worlds we build for models to learn in. And the most important question becomes: What do classrooms for AI look like?
In the past few months Silicon Valley has placed its bets, with labs investing billions in constructing such classrooms, which are called reinforcement learning (RL) environments. These environments let machines experiment, fail, and improve in realistic digital spaces.
The history of modern AI has unfolded in eras, each defined by the kind of data that the models consumed. First came the age of pretraining on internet-scale datasets. This commodity data allowed machines to mimic human language by recognizing statistical patterns. Then came data combined with reinforcement learning from human feedback—a technique that uses crowd workers to grade responses from LLMs—which made AI more useful, responsive, and aligned with human preferences.
We have experienced both eras firsthand. Working in the trenches of model data at Scale AI exposed us to what many consider the fundamental problem in AI: ensuring that the training data fueling these models is diverse, accurate, and effective in driving performance gains. Systems trained on clean, structured, expert-labeled data made leaps. Cracking the data problem allowed us to pioneer some of the most critical advancements in LLMs over the past few years.
Today, data is still a foundation. It is the raw material from which intelligence is built. But we are entering a new phase where data alone is no longer enough. To unlock the next frontier, we must pair high-quality data with environments that allow limitless interaction, continuous feedback, and learning through action. RL environments don’t replace data; they amplify what data can do by enabling models to apply knowledge, test hypotheses, and refine behaviors in realistic settings.
In an RL environment, the model learns through a simple loop: it observes the state of the world, takes an action, and receives a reward that indicates whether that action helped accomplish a goal. Over many iterations, the model gradually discovers strategies that lead to better outcomes. The crucial shift is that training becomes interactive—models aren’t just predicting the next token but improving through trial, error, and feedback.
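As a concrete illustration, here is a minimal sketch of that loop in Python. The ten-action, bandit-style environment is invented for this example; real RL environments for coding or browsing agents are vastly richer, but they expose the same observe-act-reward cycle.

import random

random.seed(0)

# A toy environment: one of 10 actions yields a reward, and the agent
# must discover which one through trial, error, and feedback.
class ToyEnv:
    def __init__(self, target=7):
        self.target = target      # hidden from the agent

    def step(self, action):
        return 1.0 if action == self.target else 0.0   # reward signal

# An epsilon-greedy learner: usually exploit the best-known action,
# occasionally explore a random one.
env = ToyEnv()
values = [0.0] * 10    # running reward estimate per action
counts = [0] * 10
for _ in range(500):
    if random.random() < 0.1:
        action = random.randrange(10)                    # explore
    else:
        action = max(range(10), key=lambda a: values[a]) # exploit
    reward = env.step(action)
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

print(max(range(10), key=lambda a: values[a]))   # learned action: 7

Nothing here tells the agent which action is right; the strategy emerges from the reward signal alone, which is precisely what static datasets cannot provide.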
For example, language models can already generate code in a simple chat setting. Place them in a live coding environment—where they can ingest context, run their code, debug errors, and refine their solution—and something changes. They shift from advising to autonomously problem-solving.
This distinction matters. In a software-driven world, the ability for AI to generate and test production-level code in vast repositories will mark a major change in capability. That leap won’t come solely from larger datasets; it will come from immersive environments where agents can experiment, stumble, and learn through iteration—much like human programmers do. The real world of development is messy: Coders have to deal with underspecified bugs, tangled codebases, vague requirements. Teaching AI to handle that mess is the only way it will ever graduate from producing error-prone attempts to generating consistent and reliable solutions.
Navigating the internet is also messy. Pop-ups, login walls, broken links, and outdated information are woven throughout day-to-day browsing workflows. Humans handle these disruptions almost instinctively, but AI can only develop that capability by training in environments that simulate the web’s unpredictability. Agents must learn how to recover from errors, recognize and persist through user-interface obstacles, and complete multi-step workflows across widely used applications.
Some of the most important environments aren’t public at all. Governments and enterprises are actively building secure simulations where AI can practice high-stakes decision-making without real-world consequences. Consider disaster relief: It would be unthinkable to deploy an untested agent in a live hurricane response. But in a simulated world of ports, roads, and supply chains, an agent can fail a thousand times and gradually get better at crafting the optimal plan.
Every major leap in AI has relied on unseen infrastructure, such as annotators labeling datasets, researchers training reward models, and engineers building scaffolding for LLMs to use tools and take action. Finding large-volume, high-quality datasets was once the bottleneck in AI, and solving that problem sparked the previous wave of progress. Today, the bottleneck is not data—it’s building RL environments that are rich, realistic, and truly useful.
The next phase of AI progress won’t be an accident of scale. It will be the result of combining strong data foundations with interactive environments that teach machines how to act, adapt, and reason across messy real-world scenarios. Coding sandboxes, OS and browser playgrounds, and secure simulations will turn prediction into competence.

Introduced in 1930 by Lionel Corp.—better known for its electric model trains—the fully functional toy stove shown at top had two electric burners and an oven that heated to 260 °C. It came with a set of cookware, including a frying pan, a pot with lid, a muffin tin, a tea kettle, and a wooden potato masher. I would have also expected a spoon, whisk, or spatula, but maybe most girls already had those. Just plug in the toy, and housewives-in-training could mimic their mothers frying eggs, baking muffins, or boiling water for tea.
Even before electrification, cast-iron toy stoves had become popular in the mid-19th century. At first fueled by coal or alcohol and later by oil or gas, these toy stoves were scaled-down working equivalents of the real thing. Girls could use their stoves along with a toy waffle iron or small skillet to whip up breakfast. If that wasn’t enough fun, they could heat up a miniature flatiron and iron their dolls’ clothes. Designed to help girls understand their domestic duties, these toys were the gendered equivalent of their brothers’ toy steam engines. If you’re thinking fossil-fuel-powered “educational toys” are a recipe for disaster, you are correct. Many children suffered serious burns and sometimes death by literally playing with fire. Then again, people in the 1950s thought playing with uranium was safe.
When electric toy stoves came on the scene in the 1910s, things didn’t get much safer, as the new entrants also lacked basic safety features. The burners on the 1930 Lionel range, for example, could only be turned off or on, but at least kids weren’t cooking over an open flame. At 86 centimeters tall, the Lionel range was also significantly larger than its more diminutive predecessors. Just the right height for young children to cook standing up.
Western Electric’s Junior Electric Range was demonstrated at an expo in 1915 in New York City. The Strong
Well before the Lionel stove, the Western Electric Co. had a cohort of girls demonstrating its Junior Electric Range at the Electrical Exposition held in New York City in 1915. The Junior Electric held its own in a display of regular sewing-machine motors, vacuum cleaners, and electric washing machines.
The Junior Electric stood about 30 cm tall with six burners and an oven. The electric cord plugged into a light fixture socket. Children played with it while sitting on the floor or as it sat on a table. A visitor to the Expo declared the miniature range “the greatest electrical novelty in years.” Cooking by electricity in any form was still innovative—George A. Hughes had introduced his eponymous electric range just five years earlier. When the Junior Electric came along, less than a third of U.S. households had been wired for electric lights.
One reason to give little girls working toy stoves was so they could learn how to differentiate between a hot flame and low heat and get a feel for cooking without burning the food. These are skills that come with experience. Directions like “bake until done in a moderate oven,” a common line in 19th-century recipes, require a lot more tacit knowledge than is needed to, say, throw together a modern boxed brownie mix. The latter comes with detailed instructions and assumes you can control your oven temperature to within a few degrees. That type of precision simply didn’t exist in the 19th century, in large part because it was so difficult to calibrate wood- or coal-burning appliances. Girls needed to start young to master these skills by the time they married and were expected to handle the household cooking on their own.
Electricity changed the game.
In his comparison of “fireless cookers,” an engineer named Percy Wilcox Gumaer exhaustively tested four different electric ovens and then presented his findings at the 32nd Annual Convention of the American Institute of Electrical Engineers (a forerunner of today’s IEEE) on 2 July 1915. At the time, metered electricity was more expensive than gas or coal, so Gumaer investigated the most economical form of cooking with electricity, comparing different approaches such as longer cooking at low heat versus faster cooking in a hotter oven, the effect of heat loss when opening the oven door, and the benefits of searing meat on the stovetop versus in the oven before making a roast.
Gumaer wasn’t starting from scratch. Similar to how Yoshitada Minami needed to learn the ideal rice recipe before he could design an automatic rice cooker, Gumaer decided that he needed to understand the principles of roasting beef. Minami had turned to his wife, Fumiko, who spent five years researching and testing variations of rice cooking. Gumaer turned to the work of Elizabeth C. Sprague, a research assistant in nutrition investigations at the University of Illinois, and H.S. Grindley, a professor of general chemistry there.
In their 1907 publication “A Precise Method of Roasting Beef,” Sprague and Grindley had defined qualitative terms like medium rare and well done by precisely measuring the internal temperature in the center of the roast. They concluded that beef could be roasted at an oven temperature between 100 and 200 °C.
Continuing that investigation, Gumaer tested 22 roasts at 100, 120, 140, 160, and 180 °C, measuring the time they took to reach rare, medium rare, and well done, and calculating the cost of the electricity consumed. He repeated his tests for biscuits, bread, and sponge cake.
In case you’re wondering, Gumaer determined that cooking with electricity could be a few cents cheaper than other methods if you roasted the beef at 120 °C instead of 180 °C. It’s also more cost-effective to sear beef on the stovetop rather than in the oven. Biscuits tasted best when baked at 200 to 240 °C, while sponge cake was best between 170 and 200 °C. Bread was better at 180 to 240 °C, but too many other factors affected its quality. In true electrical engineering fashion, Gumaer concluded that “it is possible to reduce the art of cooking with electricity to an exact science.”
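Gumaer’s economics are easy to reproduce in spirit: electricity cost is average power draw times cooking time times the price per kilowatt-hour. In the Python sketch below, the power draws, times, and rate are invented placeholders rather than Gumaer’s measured figures; the point is only that a lower oven setting can win even though the roast takes longer.

# Illustrative version of Gumaer's comparison. All numbers are made-up
# placeholders, NOT his 1915 measurements.
def roast_cost(avg_power_kw, hours, price_per_kwh=0.10):
    return avg_power_kw * hours * price_per_kwh   # cost in dollars

slow = roast_cost(avg_power_kw=0.9, hours=3.0)   # 120 C: less power, more time
fast = roast_cost(avg_power_kw=1.8, hours=1.8)   # 180 C: more power, less time
print(f"120 C roast: ${slow:.2f} vs 180 C roast: ${fast:.2f}")
# -> 120 C roast: $0.27 vs 180 C roast: $0.32, a few cents cheaper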
This semester, I’m teaching an introductory class on women’s and gender studies, and I told my students about the Lionel toy oven. They were horrified by the inherent danger. One incredulous student kept asking, “This is real? This is not a joke?” Instead of learning to cook with a toy that could heat to 260 °C, many of us grew up with the Easy-Bake Oven. The 1969 model could reach about 177 °C with its two 100-watt incandescent light bulbs. That was still hot enough to cause burns, but somehow it seemed safer. (Since 2011, Easy-Bakes have used a heating element instead of lightbulbs.)
The Queasy Bake Cookerator, designed to whip up “gross-looking, great-tasting snacks,” was marketed to boys. The Strong
The Easy-Bake I had wasn’t particularly gendered. It was orange and brown and meant to look like a different new-fangled appliance of the day, the microwave oven. But by the time my students were playing with Easy-Bake Ovens, the models were in the girly hues of pink and purple. In 2002, Hasbro briefly tried to lure boys by releasing the Queasy Bake Cookerator, which the company marketed with disgusting-sounding foods like Chocolate Crud Cake and Mucky Mud. The campaign didn’t work, and the toy was soon withdrawn.
Similarly, Lionel’s electric toy range didn’t last long on the market. Launched in 1930, it had been discontinued by 1932, but that may have had more to do with timing. The toy cost US $29.50, the equivalent of a men’s suit, a new bed, or a month’s rent. In the midst of a global depression, the toy stove was an extravagance. Lionel reverted to selling electric trains to boys.
My students discussed whether cooking is still a gendered activity. Although they agreed that meal prep disproportionately falls on women even now, they acknowledged the rise of the male chef and credited televised cooking shows with closing the gender gap. To our surprise, we discovered that one of the students in the class, Haley Mattes, had competed in and won Chopped Junior as a 12-year-old.
Haley had a play kitchen as a kid that was entirely fake: fake food, fake pans, fake utensils. She graduated to the Easy-Bake Oven, but really got into cooking the same way girls have done for centuries, by learning beside her grandmas.
Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.
An abridged version of this article appears in the December 2025 print issue as “Too Hot to Handle.”
I first came across a description of Western Electric’s Junior Electric Range in “The Latest in Current Consuming Devices,” in the November 1915 issue of Electrical Age.
The Strong National Museum of Play, in Rochester, N.Y., has a large collection of both cast-iron and electric stoves. The Strong also published two blogs that highlighted Lionel’s toy: “Kids and Cooking” and “Lionel for Ladies?”
Although Ron Hollander’s All Aboard! The Story of Joshua Lionel Cowen & His Lionel Train Company (Workman Publishing, 1981) is primarily about toy trains, it includes a few details about how Lionel marketed its electric toy stove to girls.

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Enjoy today’s videos!
Step behind the scenes with Walt Disney Imagineering Research & Development and discover how Disney uses robotics, AI, and immersive technology to bring stories to life! From the brand new self-walking Olaf in World of Frozen and BDX Droids to cutting-edge attractions like Millennium Falcon: Smugglers Run, see how magic meets innovation.
We just released a new demonstration of Mentee’s V3 humanoid robots completing a real-world logistics task together. Over an uninterrupted 18-minute run, the robots autonomously move 32 boxes from eight piles to storage racks of different heights. The video shows steady locomotion, dexterous manipulation, and reliable coordination throughout the entire task.
And there’s an uncut 18-minute version of this at the link.
[ MenteeBot ]
Thanks, Yovav!
This video contains graphic depictions of simulated injuries. Viewer discretion is advised.
In this immersive overview, guided by the DARPA Triage Challenge program manager, retired Army Col. Jeremy C. Pamplin, M.D., you’ll experience how teams of innovators, engineers, and DARPA are redefining the future of combat casualty care. Be sure to look all around! Check out competition runs, behind-the-scenes of what it takes to put on a DARPA Challenge, and glimpses into the future of lifesaving care.
Those couple of minutes starting at 6:50, with the human medic and robot teaming, were particularly cool.
[ DARPA ]
You don’t need to build a humanoid robot if you can just make existing humanoids a lot better.
I especially love 0:45 because you know what? Humanoids should spend more time sitting down, for all kinds of reasons. And of course, thank you for falling and getting up again, albeit on some of the squishiest grass on the planet.
[ Flexion ]
“Human-in-the-Loop Gaussian Splatting” wins best paper title of the week.
[ Paper ] via [ IEEE Robotics and Automation Letters in IEEE Xplore ]
Scratch that, “Extremum Seeking Controlled Wiggling for Tactile Insertion” wins best paper title of the week.
[ University of Maryland PRG ]
The battery swapping on this thing is... Unfortunate.
[ LimX Dynamics ]
To push the boundaries of robotic capability, researchers in the Department of Mechanical Engineering at Carnegie Mellon University, in collaboration with the University of Washington and Google DeepMind, have developed a new tactile sensing system that enables four-legged robots to carry unsecured cylindrical objects on their backs. This system, known as LocoTouch, features a network of tactile sensors that spans the robot’s entire back. As an object shifts, the sensors provide real-time feedback on its position, allowing the robot to continuously adjust its posture and movement to keep the object balanced.
[ Carnegie Mellon University ]
This robot is in more need of googly eyes than any other robot I’ve ever seen.
[ Zarrouk Lab ]
DPR Construction has deployed Field AI’s autonomy software on a quadruped robot at the company’s job site in Santa Clara, Calif., to greatly improve its daily surveying and data collection processes. By automating what has traditionally been a very labor-intensive and time-consuming process, Field AI is helping the DPR team operate more efficiently and effectively, while increasing project quality.
[ FieldAI ]
In our second episode of AI in Motion, our host, Waymo AI researcher Vincent Vanhoucke, talks with robotics startup founder Sergey Levine, who left a career in academic research to build better robots for the home and workplace.
[ Waymo ]