Revolutionary Security Protocol Safeguards Data in Cloud Computing from Cyber Threats

Researchers have created a method that ensures data remains secure during multiparty computations in the cloud. By utilizing the quantum properties of light, this technique could allow organizations such as hospitals and financial institutions to securely analyze sensitive patient or customer information using deep learning.

Deep-learning models are used across many sectors, from healthcare diagnostics to financial forecasting. But these models demand significant computational power, which is typically supplied by powerful cloud servers.

This dependence on cloud technology introduces major security challenges, especially in healthcare, where hospitals may be reluctant to employ AI for analyzing sensitive patient information due to concerns about privacy.

To address this critical issue, researchers at MIT have created a security protocol that harnesses the quantum properties of light, ensuring that data sent to and from a cloud server remains protected during deep-learning computations.

By encoding data into the laser light used in fiber optic communication systems, the protocol takes advantage of the core principles of quantum mechanics, rendering it impossible for intruders to copy or intercept information without being detected.

Furthermore, this method guarantees security without reducing the accuracy of deep-learning models. Tests showed that the protocol maintained an impressive 96 percent accuracy while implementing rigorous security measures.

“Deep learning models like GPT-4 are incredibly powerful but require substantial computational resources. Our protocol allows users to utilize these advanced models without compromising the confidentiality of their data or the proprietary aspects of the models themselves,” says Kfir Sulimany, a postdoctoral researcher at MIT and lead author of a paper on this security protocol.

Sulimany collaborated with Sri Krishna Vadlamani, another postdoc at MIT; Ryan Hamerly, a former postdoc now affiliated with NTT Research, Inc.; Prahlad Iyengar, a graduate student in electrical engineering and computer science (EECS); and senior author Dirk Englund, a professor in EECS and lead researcher of the Quantum Photonics and Artificial Intelligence Group. The research findings were recently shared at the Annual Conference on Quantum Cryptography.

A two-way street for security in deep learning

The scenario the researchers examined involves two parties: a client that holds confidential data, such as medical images, and a central server that operates a deep-learning model.

The client wishes to employ the deep-learning model to make predictions, such as determining whether a patient has cancer based on medical imaging, while ensuring no information about the patient is revealed.

In this context, sensitive data must be transmitted to generate predictions while keeping patient information secure.

Simultaneously, the server aims to protect its proprietary model, which may have cost a company like OpenAI years and millions of dollars to develop.

“Both parties possess information they want to keep confidential,” Vadlamani notes.

In standard digital computing, a malicious entity could easily duplicate data sent between the server and the client.

However, quantum information cannot be perfectly replicated. The researchers exploit this property, known as the no-cloning theorem, in their security protocol.
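
In formal terms (a textbook statement, not something specific to this paper), the no-cloning theorem says that no single quantum operation can duplicate an arbitrary unknown state:

```latex
% No-cloning theorem, standard form: there is no unitary U that copies
% every unknown state |psi>, which is why the weight-carrying light
% cannot be duplicated without leaving a detectable trace.
\nexists\, U \ \text{unitary such that} \quad
U\bigl(\lvert\psi\rangle \otimes \lvert 0\rangle\bigr)
  = \lvert\psi\rangle \otimes \lvert\psi\rangle
  \quad \text{for all } \lvert\psi\rangle .
```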

In their approach, the server encodes the weights of a deep neural network into an optical field using laser light.

A neural network is a type of deep learning model featuring layers of interconnected nodes, or neurons, that perform computations on data. The weights are the elements of the model that carry out mathematical operations on each input, layer by layer. The output from one layer is fed into the next until the final layer produces a prediction.
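
To make that layer-by-layer flow concrete, here is a minimal NumPy sketch of feed-forward inference; the layer sizes, random weights, and ReLU nonlinearity are illustrative choices, not details taken from the paper.

```python
import numpy as np

def forward(weights, biases, x):
    """Layer-by-layer inference: each layer's output feeds the next."""
    activation = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = activation @ W + b          # the weights do the math on each input
        is_last = i == len(weights) - 1
        activation = z if is_last else np.maximum(0.0, z)  # ReLU between layers
    return activation                   # the final layer's output is the prediction

# Illustrative network: 4 inputs -> 8 hidden units -> 2 outputs.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 2))]
biases = [np.zeros(8), np.zeros(2)]
print(forward(weights, biases, rng.normal(size=4)))
```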

The server sends the weights of the network to the client, which executes operations based on its private data while keeping the data hidden from the server.

At the same time, the security protocol limits the client to measuring only a single result, and the quantum nature of light prevents the client from copying the weights.

As soon as the client inputs the first result into the next layer, the protocol is designed to eliminate information from the first layer, preventing the client from gaining further insights into the model.

“Instead of measuring the entire stream of light from the server, the client measures only what is essential to operate the deep neural network and feed the result into the next layer. The remaining light is sent back to the server for security assessments,” explains Sulimany.

Because of the no-cloning theorem, the client inadvertently introduces slight errors to the model during the measurement of its result. When the server receives the leftover light from the client, it can analyze these errors to see if any information was compromised. Importantly, this leftover light does not expose any details about the client’s data.
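
The quantum optics itself cannot run in ordinary code, but the information flow of the check can be mimicked classically. In the toy sketch below, every name, noise level, and threshold is an illustrative assumption rather than a value from the paper: an honest measurement perturbs the light-encoded weights only slightly, a client that tries to extract more disturbs them more strongly, and the server compares the returned residual with what it sent. Note that the residual depends only on the weights and noise, never on the client’s private input.

```python
import numpy as np

rng = np.random.default_rng(1)
MEASUREMENT_NOISE = 1e-3  # small disturbance an honest measurement introduces
ALARM_THRESHOLD = 3e-3    # larger deviations suggest the client copied too much

def client_measure(optical_weights, x, honest=True):
    """Client extracts one layer's result; measuring perturbs the 'light'."""
    result = x @ optical_weights
    residual = optical_weights + rng.normal(0.0, MEASUREMENT_NOISE,
                                            optical_weights.shape)
    if not honest:  # extracting extra information disturbs the state more
        residual += rng.normal(0.0, 10 * MEASUREMENT_NOISE,
                               optical_weights.shape)
    return result, residual

def server_check(sent, returned):
    """Server compares the returned light with what it originally sent."""
    return np.abs(returned - sent).mean() < ALARM_THRESHOLD

W = rng.normal(size=(4, 8))  # one layer's weights, "encoded" in light
x = rng.normal(size=4)       # the client's private input never leaves the client
_, residual = client_measure(W, x, honest=True)
print("honest client passes check:", server_check(W, residual))
_, residual = client_measure(W, x, honest=False)
print("cheating client passes check:", server_check(W, residual))
```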

A practical protocol

Today’s telecommunications infrastructure typically relies on optical fiber to move data, because of the need for high bandwidth over long distances. Since this infrastructure already uses optical lasers, the researchers could encode data into light for their security protocol without any special hardware.

When testing their method, the researchers confirmed it could ensure security for both the server and client while allowing the deep neural network to achieve 96 percent accuracy.

The small amount of information about the model that leaks while the client performs operations amounts to less than 10 percent of what an adversary would need to recover any hidden details. In the other direction, a malicious server would obtain only about 1 percent of the information it would need to steal the client’s data.

“You can trust that security is maintained both ways — from the client to the server and from the server to the client,” says Sulimany.

“A few years back, when we demonstrated distributed machine learning inference between MIT’s main campus and the MIT Lincoln Laboratory, it became clear to me that we could innovate to offer physical-layer security, building on previous quantum cryptography research conducted at that testbed,” explains Englund. “However, numerous deep theoretical obstacles had to be addressed to determine whether we could actually implement privacy-assured distributed machine learning. This became feasible only after Kfir joined our team because he uniquely understood both the theoretical and experimental aspects needed to develop the comprehensive framework for this endeavor.”

Looking ahead, the researchers plan to explore how their protocol might be applied to federated learning, in which multiple parties use their data to train a central deep-learning model. They are also interested in running the protocol with quantum operations, rather than the classical operations studied in this work, which could bring benefits in both accuracy and security.

This research was supported in part by the Israeli Council for Higher Education and the Zuckerman STEM Leadership Program.