Patient Zero: Astronauts


Artemis II, the vehicle that will fly humans around the Moon.

… after Apollo 17, which landed humans on the Moon

Today, a historic day in 2026, four astronauts strapped into a spacecraft to begin their 10-day journey around the Moon and back, the farthest humans have been from Earth in over 50 years. Before they ever reached the launchpad, their bodies were already being monitored: sleep schedules controlled, nutrition tracked, medical checks completed, every physiological variable accounted for. The mission hadn’t started, and the design problem already had.

That design problem is not unique to space. How do you monitor a body continuously and accurately when the nearest hospital is far away? For the Artemis II crew, that distance is 252,000 miles. But the constraints NASA has spent decades solving aren’t exotic; they are the fundamental problems of health monitoring, with less room for error. The technologies coming out of that work, seamless sensing, autonomous diagnostics, human-legible health data, are already finding their way into everyday clinical and consumer contexts.

Astronauts have been monitored during spaceflight since the Gemini program in the 1960s. But for most of that history, heart rate was the only physiological parameter tracked during a spacewalk. The suit kept the astronaut alive; it didn’t tell you much about how they were doing inside it. That has changed.

NASA’s Bioastronautics Roadmap now classifies crew health monitoring during extravehicular activities as “Autonomous Medical Care”, recognizing that beyond a certain distance from Earth, you can’t rely on a ground team to catch every anomaly in real time. The monitoring has to be embedded, continuous, and increasingly self-interpreting. The engineering constraints that follow from that requirement are worth mapping, because they show up again in consumer and clinical wearables, just at lower stakes. Sensors must integrate into the suit without interfering with its primary function. Electrical components must be isolated within a high-oxygen environment. Every connection is designed for quick disconnect in case of emergency. The data pipeline runs from body to suit to ground team, with redundancy at each step. The Canadian Space Agency’s Bio-Monitor system, a lightweight garment with embedded biosensors that records ECG, blood pressure, oxygen saturation, temperature, and respiration without implants, is one example of what that constraint set produces when you optimize hard enough. It’s now used in clinical research on Earth for everything from cardiovascular studies to sleep disorders.
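As a rough sketch of what “redundancy at each step” can mean in software, here is a minimal Python model of a suit-side buffer that holds every vitals frame until some downlink accepts it, so a dropped link delays data instead of losing it. The frame fields, link names, and failure rates here are hypothetical, for illustration only:

```python
from dataclasses import dataclass
import random
import time

@dataclass
class VitalsFrame:
    t: float          # timestamp in seconds
    ecg_bpm: float    # heart rate
    spo2_pct: float   # blood oxygen saturation

class SuitBuffer:
    """On-suit store: frames stay buffered until a downlink succeeds,
    so a dropped link never loses data (at-least-once delivery)."""
    def __init__(self) -> None:
        self.pending: list[VitalsFrame] = []

    def record(self, frame: VitalsFrame) -> None:
        self.pending.append(frame)

    def flush(self, links) -> None:
        # Try each redundant link in priority order; on failure,
        # keep the frames and fall through to the backup.
        for send in links:
            try:
                for frame in self.pending:
                    send(frame)
                self.pending.clear()
                return
            except ConnectionError:
                continue

def primary_link(frame: VitalsFrame) -> None:
    if random.random() < 0.3:  # simulated dropout
        raise ConnectionError("primary downlink lost")
    print(f"[primary] {frame}")

def backup_link(frame: VitalsFrame) -> None:
    print(f"[backup ] {frame}")

buffer = SuitBuffer()
for _ in range(3):
    buffer.record(VitalsFrame(time.time(), ecg_bpm=72.0, spo2_pct=98.5))
    buffer.flush([primary_link, backup_link])
```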

That pattern of extreme constraints producing transferable technology is the through-line of this piece. The three scenarios that follow each start from a NASA design problem and trace where it leads in the broader future of health monitoring.

The Bio-Monitor shirt works because it doesn’t feel like a medical device. In a spacesuit, there’s no room for a discrete wearable; sensing has to live in the garment itself. On Earth, that same requirement is driving a shift in health monitoring design. Flexible, skin-conforming electronics are moving sensing off the wrist and onto the body. Ultrathin epidermal patches can now measure ECG, temperature, and strain simultaneously with clinical-grade accuracy. Conductive textiles and hydrogel-based sensors extend that further, tracking biomarkers in sweat and interstitial fluid continuously and non-invasively. The material challenges are largely being solved. What’s less resolved is the interface problem. A discrete wearable has a screen, a button, a visible indicator that it’s working. When the device disappears into the garment, so does the feedback loop. The wearer has no obvious way to know the system is functioning or what the data means. In regulated hardware, that feedback loop has always had to be explicitly designed; it doesn’t emerge from the form factor. The same applies here: sensing capability and usability are not the same thing, and the gap between them is a design problem.
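One way to rebuild that feedback loop when the device disappears into the garment is a signal-quality self-check that turns raw samples into a user-facing state, surfaced through haptics or a paired phone. A minimal sketch, with made-up thresholds and messages rather than clinical logic:

```python
import statistics

def signal_quality(samples: list[float], lo: float, hi: float) -> str:
    """Classify a window of raw sensor samples. Flatline and saturation
    are typical failure modes when a textile electrode loses skin
    contact; the thresholds here are illustrative."""
    if len(samples) < 2:
        return "no_data"
    if statistics.pstdev(samples) < 1e-3:
        return "flatline"    # electrode likely detached
    if min(samples) < lo or max(samples) > hi:
        return "saturated"   # outside the expected range
    return "ok"

# The garment has no screen, so each state maps to a message the
# paired device (or a haptic pattern) can surface to the wearer.
STATUS_MESSAGE = {
    "ok": "Monitoring active",
    "flatline": "Sensor contact lost, adjust garment",
    "saturated": "Reading out of range, check fit",
    "no_data": "Waiting for signal",
}

window = [0.82, 0.79, 0.85, 0.81]  # e.g. normalized ECG amplitudes
print(STATUS_MESSAGE[signal_quality(window, lo=0.0, hi=1.0)])
```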

Four days from Earth, if something goes wrong with one of the Artemis II crew, there’s no escalation path: no emergency consult, no option to transfer the patient. The monitoring system has to do more than collect data; it has to know what to do with it. That’s a different design requirement from what most health monitoring systems are built around today. A smartwatch flags an irregular heartbeat and tells you to see a doctor. A continuous glucose monitor sends an alert to a care team who interprets it. That division of labor, where the device captures and the clinician decides, works well when the clinician is reachable, but it’s a brittle architecture. Edge AI changes that equation by moving the interpretive layer closer to the data source, on the device itself or on a local node, rather than a distant server or a human in the loop. Recent work has demonstrated CNN-LSTM models (a hybrid deep learning architecture that combines convolutional neural networks for spatial feature extraction with long short-term memory networks for sequence modeling) deployed on edge hardware achieving over 91% accuracy in real-time anomaly detection for cardiac monitoring, with latency low enough to be clinically useful. The design tension this introduces is one that regulated hardware designers already know well: autonomy and trust don’t scale together automatically. A system that makes decisions without a clinician in the loop needs to be legible enough that the person wearing it, or the care team reviewing it later, can understand what it did and why. That is a fundamental requirement, and one that gets harder to satisfy as the system grows more autonomous.
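To make that parenthetical concrete, here is a minimal CNN-LSTM in PyTorch shaped for single-lead ECG windows. The layer sizes, window length, and sampling rate are assumptions for illustration, not the published model behind the accuracy figure above:

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Minimal CNN-LSTM for ECG anomaly detection: convolutions extract
    local waveform shape, the LSTM models how those features evolve
    across the window."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, samples), e.g. a 4-second window at 250 Hz
        f = self.features(x)       # (batch, 32, steps)
        f = f.transpose(1, 2)      # (batch, steps, 32) for the LSTM
        _, (h, _) = self.lstm(f)   # final hidden state: (1, batch, 64)
        return self.head(h[-1])    # anomaly logits per window

model = CNNLSTM()
window = torch.randn(1, 1, 1000)   # one simulated ECG window
print(model(window).shape)         # torch.Size([1, 2])
```

The division of labor is the point: the convolutions handle waveform morphology, the LSTM handles rhythm over time, and the whole network stays small enough to run on a local node rather than a distant server.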

An astronaut on a lunar EVA gets a readout from their suit. A flight surgeon on the ground could interpret that cluster of signals in context, but the astronaut has to act in real time, in a pressurized suit, while completing a task. That gap between accurate and usable is one of the oldest unsolved problems in medical device design. Health data has historically been structured for clinicians, and as wearables move into everyday life, that design debt is becoming harder to ignore. The challenge isn’t simplification; dumbing down physiological data loses the nuance that makes it meaningful. What’s needed is translation, representing the same signal in a way that’s actionable for a non-expert without stripping out what makes it useful. In regulated hardware, that translation layer has always been part of the design brief; surfacing the right information at the right moment is as consequential as the sensing itself.
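In code terms, a translation layer can be as simple as returning both a plain-language action and the underlying numbers, so the nuance stays available instead of being stripped out. A hypothetical sketch, with illustrative thresholds that are not clinical guidance:

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate_bpm: int
    spo2_pct: float

def translate(v: Vitals) -> tuple[str, str]:
    """Return (action, detail): an instruction a non-expert can act on,
    plus the raw readings for anyone who wants the detail."""
    detail = f"HR {v.heart_rate_bpm} bpm, SpO2 {v.spo2_pct:.0f}%"
    if v.spo2_pct < 90:
        return ("Stop activity and check your oxygen supply now.", detail)
    if v.heart_rate_bpm > 160:
        return ("Slow down and rest until your heart rate drops.", detail)
    return ("No action needed.", detail)

action, detail = translate(Vitals(heart_rate_bpm=172, spo2_pct=97.0))
print(action)  # Slow down and rest until your heart rate drops.
print(detail)  # HR 172 bpm, SpO2 97%
```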

The technologies NASA has been developing, sensing astronauts’ health through their garments, diagnostics that run without a clinician in the loop, data communicated clearly to the person wearing it, will be needed on Earth too, all the more as climates change. The constraints of space produced a preview of where the field is heading, and that trajectory is already visible in how clinical and consumer health hardware is evolving. For designers with a background in regulated hardware, this moment feels less like a new direction and more like a natural continuation. The problems are recognizable: complex constraint environments, high-consequence outputs, feedback loops that have to be designed rather than assumed. Health monitoring is moving closer to the body and further from the clinic, and Artemis II will be another beginning for this frontier.


Next: Designing for invisible constraints