Revolutionary Smartphone App Transforms Your Device into a Full-Body Motion Capture Studio

Engineers have created an innovative full-body motion capture system that doesn’t need specialized rooms, costly gear, cumbersome cameras, or a variety of sensors. Instead, it simply requires a smartphone, smartwatch, or earbuds.

Engineers from Northwestern University have introduced a groundbreaking system for capturing full-body motion that does not depend on specialized environments, expensive tools, large cameras, or multiple sensors.

All that is needed is an ordinary mobile device.

This new system, named MobilePoser, takes advantage of the sensors already embedded in everyday mobile devices, including smartphones, smartwatches, and wireless earbuds. By combining sensor data, machine learning, and physics, MobilePoser accurately tracks a person’s full-body pose and position in space in real time.
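
As a rough illustration of that division of labor, the sketch below strings together the three stages the article describes: sensor fusion, a learned estimator, and physics-based refinement. Every function here is a toy stand-in (the later sketches flesh out each stage), not MobilePoser’s actual code.

```python
import numpy as np

def fuse_sensors(phone, watch):
    """Stage 1: merge per-device IMU readings into one feature vector."""
    return np.concatenate([phone, watch])

def estimate_pose(features):
    """Stage 2: stand-in for the learned estimator (a fixed random linear map)."""
    rng = np.random.default_rng(seed=0)
    weights = rng.standard_normal((3, features.size))  # placeholder "model"
    return weights @ features                          # toy 3-value "pose"

def refine_with_physics(pose, limit=2.0):
    """Stage 3: keep the estimate within physically plausible bounds."""
    return np.clip(pose, -limit, limit)

pose = refine_with_physics(estimate_pose(fuse_sensors(np.ones(6), np.ones(6))))
print(pose)
```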

“Operating in real time on mobile devices, MobilePoser achieves cutting-edge accuracy through advanced machine learning and physics-based optimization, opening up new opportunities in gaming, fitness, and indoor navigation without requiring specialized equipment,” said Karan Ahuja from Northwestern University, who spearheaded the research. “This technology represents a major advancement toward mobile motion capture, making immersive experiences more available and paving the way for creative applications across various fields.”

Ahuja’s team will present MobilePoser on October 15 at the 2024 ACM Symposium on User Interface Software and Technology (UIST) in Pittsburgh. The paper, titled “MobilePoser: Real-time full-body pose estimation and 3D human translation from IMUs in mobile consumer devices,” will be presented as part of a session called “Poses as Input.”

Ahuja, an expert in human-computer interaction, serves as the Lisa Wissner-Slivka and Benjamin Slivka Assistant Professor of Computer Science at Northwestern’s McCormick School of Engineering, where he leads the Sensing, Perception, Interactive Computing and Experience (SPICE) Lab.

Challenges with current systems

Many movie fans are familiar with motion-capture techniques, often revealed in behind-the-scenes footage. To create CGI characters like Gollum in “The Lord of the Rings” or the Na’vi in “Avatar,” actors wear form-fitting suits covered in sensors and move around purpose-built spaces. A computer collects the sensor data to reconstruct the actor’s movements and expressions.

“This is the benchmark for motion capture, but it can cost more than $100,000 to set up,” Ahuja noted. “We aimed to create a user-friendly, accessible version that nearly anyone can utilize with the devices they already possess.”

Other motion-sensing technologies, like Microsoft Kinect, depend on fixed cameras to observe body movements. These systems work effectively if a person is within the camera’s range but are less effective for mobile or on-the-go purposes.

Estimating poses

To overcome these limitations, Ahuja’s team turned to inertial measurement units (IMUs), which combine accelerometers, gyroscopes, and magnetometers to measure a body’s movement and orientation. These sensors already reside in smartphones and similar devices, but on their own they are not accurate enough for motion capture. To boost their performance, the team added a custom multi-stage artificial intelligence (AI) algorithm, trained on a large, publicly available dataset of synthesized IMU data generated from high-quality motion-capture data.
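
That synthesis step can be made concrete. One standard way to turn optical motion-capture trajectories into virtual accelerometer readings is to take the second finite difference of position, subtract gravity, and rotate the result into the sensor’s frame. The NumPy sketch below shows this approach under assumed conventions (Y-up world frame, world-to-sensor rotations); the exact procedure the team used may differ.

```python
import numpy as np

GRAVITY = np.array([0.0, -9.81, 0.0])  # m/s^2, assumed Y-up world frame

def synthesize_accelerometer(positions, rotations, fps):
    """Approximate accelerometer readings from a mocap trajectory.

    positions: (T, 3) world-frame positions of the tracked point, in meters
    rotations: (T, 3, 3) world-to-sensor rotation matrices
    fps: mocap frame rate in Hz
    """
    dt = 1.0 / fps
    # Linear acceleration via the second central finite difference.
    accel_world = (positions[2:] - 2 * positions[1:-1] + positions[:-2]) / dt**2
    # An accelerometer measures specific force: acceleration minus gravity,
    # expressed in the sensor's own frame (at rest it reads +1 g upward).
    specific_force = accel_world - GRAVITY
    return np.einsum('tij,tj->ti', rotations[1:-1], specific_force)

# Toy example: a point oscillating vertically, sampled at 60 fps.
t = np.arange(300) / 60.0
pos = np.stack([np.zeros_like(t), 0.1 * np.sin(2 * np.pi * t), np.zeros_like(t)], axis=1)
rot = np.tile(np.eye(3), (300, 1, 1))  # sensor aligned with the world frame
print(synthesize_accelerometer(pos, rot, fps=60).shape)  # (298, 3)
```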

From the raw sensor data, MobilePoser obtains the acceleration and orientation of the body. Its AI algorithm then processes this information to estimate body joint positions and rotations, walking speed and direction, and contact between the feet and the ground.
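
As a schematic of what such an estimator might look like, here is a small recurrent network in PyTorch that maps a window of IMU features to joint rotations, root velocity, and foot-contact probabilities. The layer sizes and output heads are illustrative guesses, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class PoseEstimator(nn.Module):
    """Illustrative recurrent model: IMU features in, pose estimates out.

    Output sizes are placeholders: 24 joints x 6D rotation, a 3D root
    velocity, and per-foot contact probabilities.
    """
    def __init__(self, imu_dim=12, hidden=256, joints=24):
        super().__init__()
        self.rnn = nn.LSTM(imu_dim, hidden, num_layers=2, batch_first=True)
        self.rotations = nn.Linear(hidden, joints * 6)  # joint rotations
        self.velocity = nn.Linear(hidden, 3)            # root velocity
        self.contact = nn.Linear(hidden, 2)             # foot-contact logits

    def forward(self, imu):  # imu: (batch, time, imu_dim)
        h, _ = self.rnn(imu)
        return (self.rotations(h),
                self.velocity(h),
                torch.sigmoid(self.contact(h)))

model = PoseEstimator()
rot, vel, contact = model(torch.randn(1, 120, 12))  # 2 s of 60 Hz input
print(rot.shape, vel.shape, contact.shape)
```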

MobilePoser then feeds the predicted movements through a physics-based optimizer, which refines them so they remain consistent with what a real human body can do. Joints cannot bend backward, for instance, and a head cannot rotate a full 360 degrees. The physics optimizer removes such impossible motions from the captured data.
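
The simplest version of this idea is a hard projection onto anatomical joint limits, sketched below with made-up ranges. MobilePoser’s actual optimizer is physics-based and more sophisticated, but the goal is the same: rule out motions a human body cannot perform.

```python
import numpy as np

# Hypothetical per-joint angle limits in degrees (illustrative values,
# not taken from the paper).
JOINT_LIMITS_DEG = {
    "knee": (0.0, 150.0),        # knees don't bend backward
    "elbow": (0.0, 145.0),
    "neck_yaw": (-80.0, 80.0),   # heads can't spin 360 degrees
}

def enforce_limits(angles_deg):
    """Project predicted joint angles back into their anatomical ranges."""
    return {joint: float(np.clip(angle, *JOINT_LIMITS_DEG[joint]))
            for joint, angle in angles_deg.items()}

raw = {"knee": -12.0, "elbow": 90.0, "neck_yaw": 200.0}  # raw network output
print(enforce_limits(raw))  # {'knee': 0.0, 'elbow': 90.0, 'neck_yaw': 80.0}
```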

The system achieves a tracking error of just 8 to 10 centimeters. By comparison, Microsoft Kinect achieves an error of 4 to 5 centimeters, but only as long as the user stays within the camera’s field of view. MobilePoser, by contrast, lets users move around freely.

“The accuracy improves when a person uses more than one device, like a smartwatch on their wrist and a smartphone in their pocket,” Ahuja mentioned. “Nonetheless, a crucial feature of the system is its adaptability. If you don’t have your watch one day but your phone is available, it can still determine your full-body pose.”
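
One common way to build that kind of adaptability is to give the model a fixed input layout with one slot per device and zero-fill the slots for devices that are absent, so a single network handles every combination. The sketch below illustrates the idea; whether MobilePoser uses exactly this mechanism is an assumption.

```python
import torch

DEVICE_SLOTS = ["phone", "watch", "earbuds"]  # fixed input layout
FEATURES_PER_DEVICE = 12                      # e.g., acceleration + rotation

def pack_inputs(available: dict) -> torch.Tensor:
    """Build a fixed-size input vector, zero-filling absent devices.

    available maps device name -> (FEATURES_PER_DEVICE,) tensor.
    """
    slots = [available.get(name, torch.zeros(FEATURES_PER_DEVICE))
             for name in DEVICE_SLOTS]
    return torch.cat(slots)  # shape (36,) regardless of which devices exist

# Watch left at home: the phone's readings fill its slot; the rest are zeros.
x = pack_inputs({"phone": torch.randn(FEATURES_PER_DEVICE)})
print(x.shape)  # torch.Size([36])
```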

Future applications

MobilePoser promises richer experiences for gamers, but the app also opens new avenues in health and fitness. It goes beyond simple step counting, letting users monitor their overall posture and maintain proper form during workouts. The app could also help healthcare providers assess patients’ mobility, activity levels, and gait. Ahuja envisions the technology supporting indoor navigation as well, addressing a limitation of GPS, which works reliably only outdoors.

“Currently, physicians use step counters to track patient mobility,” Ahuja remarked. “That’s rather disappointing, don’t you think? Our phones can give us the temperature in Rome, yet they know more about the outside world than about our bodies. We want our phones to be more than just smart pedometers; they should detect various activities, identify poses, and serve as more proactive assistants.”

To encourage further research in this area, Ahuja’s team has released their pre-trained models, data pre-processing scripts, and model training code as open-source software. The team also said the app will soon be available for iPhone, AirPods, and Apple Watch.