Deleted Scene from DEADPOOL & WOLVERINE: Dr. Ant & The Quantumverse Of Madness

Marvel has just dropped the first deleted scene from Deadpool & Wolverine, and it totally should have been included in the movie! Titled “Elevator Ride,” the scene shows Wade Wilson getting his first crash course in the Multiverse, courtesy of Mr. Paradox at the TVA. As they step into an elevator, Paradox says: “Mr. Wilson, this may come as a shock to a narcissistic, blathering bag of meat like yourself, but your universe is not the only one in existence.”

Deadpool, never missing a beat, shoots back: “Oh please, you think I haven’t seen Doctor Ant and the Quantumverse of Madness?” Classic Wade.



CubeSats, the tiniest of satellites, are changing the way we explore the solar system

Mustafa Aksoy, University at Albany, State University of New York

Most CubeSats weigh less than a bowling ball, and some are small enough to hold in your hand. But the impact these instruments are having on space exploration is gigantic. CubeSats – miniature, agile and cheap satellites – are revolutionizing how scientists study the cosmos.

A standard-size CubeSat is tiny, about 4 pounds (roughly 2 kilograms). Some are larger, maybe four times the standard size, but others are no more than a pound.

As a professor of electrical and computer engineering who works with new space technologies, I can tell you that CubeSats are a simpler and far less costly way to reach other worlds.

Rather than carry many instruments with a vast array of purposes, these Lilliputian-size satellites typically focus on a single, specific scientific goal – whether discovering exoplanets or measuring the size of an asteroid. They are affordable throughout the space community, even to small startup, private companies and university laboratories.

Tiny satellites, big advantages

CubeSats’ advantages over larger satellites are significant. CubeSats are cheaper to develop and test. The savings of time and money means more frequent and diverse missions along with less risk. That alone increases the pace of discovery and space exploration.

CubeSats don’t travel under their own power. Instead, they hitch a ride; they become part of the payload of a larger spacecraft. Stuffed into containers, they’re ejected into space by a spring mechanism attached to their dispensers. Once in space, they power on. CubeSats usually conclude their missions by burning up as they enter the atmosphere after their orbits slowly decay.

Case in point: A team of students at Brown University built a CubeSat in under 18 months for less than US$10,000. The satellite, about the size of a loaf of bread and developed to study the growing problem of space debris, was deployed off a SpaceX rocket in May 2022.

A CubeSat can go from whiteboard to space in less than a year.

Smaller size, single purpose

Sending a satellite into space is nothing new, of course. The Soviet Union launched Sputnik 1 into Earth orbit back in 1957. Today, about 10,000 active satellites are out there, and nearly all are engaged in communications, navigation, military defense, tech development or Earth studies. Only a few – less than 3% – are exploring space.

That is now changing. Satellites large and small are rapidly becoming the backbone of space research. These spacecrafts can now travel long distances to study planets and stars, places where human explorations or robot landings are costly, risky or simply impossible with the current technology.

But the cost of building and launching traditional satellites is considerable. NASA’s lunar reconnaissance orbiter, launched in 2009, is roughly the size of a minivan and cost close to $600 million. The Mars reconnaissance orbiter, with a wingspan the length of a school bus, cost more than $700 million. The European Space Agency’s solar orbiter, a 4,000-pound (1,800-kilogram) probe designed to study the Sun, cost $1.5 billion. And the Europa Clipper – the length of a basketball court and scheduled to launch in October 2024 to the Jupiter moon Europa – will ultimately cost $5 billion.

These satellites, relatively large and stunningly complex, are vulnerable to potential failures, a not uncommon occurrence. In the blink of an eye, years of work and hundreds of millions of dollars could be lost in space.

Two scientists wearing masks, gloves, head coverings and white clean suits work on an instrument in a laboratory.
NASA scientists prep the ASTERIA spacecraft for its April 2017 launch. NASA/JPL-Caltech

Exploring the Moon, Mars and the Milky Way

Because they are so small, CubeSats can be released in large numbers in a single launch, further reducing costs. Deploying them in batches – known as constellations – means multiple devices can make observations of the same phenomena.

For example, as part of the Artemis I mission in November 2022, NASA launched 10 CubeSats. The satellites are now trying to detect and map water on the Moon. These findings are crucial, not only for the upcoming Artemis missions but to the quest to sustain a permanent human presence on the lunar surface. The CubeSats cost $13 million.

The MarCO CubeSats – two of them – accompanied NASA’s Insight lander to Mars in 2018. They served as a real-time communications relay back to Earth during Insight’s entry, descent and landing on the Martian surface. As a bonus, they captured pictures of the planet with wide-angle cameras. They cost about $20 million.

CubeSats have also studied nearby stars and exoplanets, which are worlds outside the solar system. In 2017, NASA’s Jet Propulsion Laboratory deployed ASTERIA, a CubeSat that observed 55 Cancri e, also known as Janssen, an exoplanet eight times larger than Earth, orbiting a star 41 light years away from us. In reconfirming the existence of that faraway world, ASTERIA became the smallest space instrument ever to detect an exoplanet.

Two more notable CubeSat space missions are on the way: HERA, scheduled to launch in October 2024, will deploy the European Space Agency’s first deep-space CubeSats to visit the Didymos asteroid system, which orbits between Mars and Jupiter in the asteroid belt.

And the M-Argo satellite, with a launch planned for 2025, will study the shape, mass and surface minerals of a soon-to-be-named asteroid. The size of a suitcase, M-Argo will be the smallest CubeSat to perform its own independent mission in interplanetary space.

The swift progress and substantial investments already made in CubeSat missions could help make humans a multiplanetary species. But that journey will be a long one – and depends on the next generation of scientists to develop this dream.The Conversation

Mustafa Aksoy, Assistant Professor of Electrical & Computer Engineering, University at Albany, State University of New York

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Today’s Hottest Deals: Meta Quest 3 512GB VR Headset, LG OLED evo C4 Series 55 Inch 4K TV, 25th Anniversary Ghost Face Mask, and MORE!

LG OLED TV Deal

For today’s edition of “Deal of the Day,” here are some of the best deals we stumbled on while browsing the web this morning! Please note that Geeks are Sexy might get a small commission from qualifying purchases done through our posts. As an Amazon Associate, I earn from qualifying purchases.

Blink Outdoor 4 one-camera system + Amazon Echo Show 5 Smart Display$189.98 $59.99 (For Prime Members) (The Echo Show 5 Alone is worth $89.99!)

Meta Quest 3 512GB— Breakthrough Mixed Reality Headset (Batman: Arkham Shadow and a 3-month trial of Meta Quest+ included)$649.00 $499.00

Logitech G432 Wired Gaming Headset, 7.1 Surround Sound, DTS Headphone:X 2.0, Flip-to-Mute Mic$79.99 $38.49

Lenovo 300 USB Combo, Full-Size Wired Keyboard & Mouse$19.99 $9.30

LG OLED evo C4 Series 55 Inch 4K TV$1,999.99 $1,278.95

Wall Outlet Extender with Shelf and Night Light, Surge Protector$18.99 $11.69 (Click “Redeem” on the 2 Coupons)

Officially Licensed 25th Anniversary Ghost Face Mask – Great way to easily celebrate Halloween! Just throw on with a Black outfit and you’re good to go! – $29.99 $14.72

Microsoft Office Pro 2021 for Windows: Lifetime License$219.99 $34.97

EaseUS Data Recovery Wizard: Lifetime Subscription$149.95 $49.99

1minAI: Lifetime Subscription – Why choose between ChatGPT, Midjourney, GoogleAI, and MetaAI when you could get them all in one tool? – $234.00 $39.99

Dame Maggie Smith, Our Beloved Professor McGonagall, Has Died

Maggie Smith as Professor Minerva McGonagall

The Wizarding World has lost a true legend. Dame Maggie Smith, the brilliant actress behind the beloved Professor Minerva McGonagall, has passed away at the age of 89. Known to Muggles and wizards alike for her sharp wit and commanding presence, Smith brought Hogwarts’ transfiguration teacher to life across seven of the eight Harry Potter films, forever cementing her place in the hearts of fans.

Her children, Chris Larkin and Toby Stephens, confirmed that she passed peacefully in hospital, surrounded by family and close friends. In a heartfelt statement, they shared, “An intensely private person, she was with friends and family at the end. She leaves behind two sons and five loving grandchildren who are devastated by the loss of their extraordinary mother and grandmother.”

While Harry Potter fans will always remember her for Professor McGonagall’s stern yet kind-hearted guidance, Maggie Smith’s 70-year career spanned far beyond Hogwarts. She was also celebrated for her role as the sharp-tongued Dowager Countess in Downton Abbey, earning her two Academy Awards and legions of fans across generations.

Rest in peace, Dame Maggie Smith – may your next great adventure be as enchanting as the world you helped bring to life.

[Via BBC]

OpenAI’s Strawberry program is reportedly capable of reasoning. It might be able to deceive humans

Shweta Singh, Warwick Business School, University of Warwick

OpenAI, the company that made ChatGPT, has launched a new artificial intelligence (AI) system called Strawberry. It is designed not just to provide quick responses to questions, like ChatGPT, but to think or “reason”.

This raises several major concerns. If Strawberry really is capable of some form of reasoning, could this AI system cheat and deceive humans?

OpenAI can program the AI in ways that mitigate its ability to manipulate humans. But the company’s own evaluations rate it as a “medium risk” for its ability to assist experts in the “operational planning of reproducing a known biological threat” – in other words, a biological weapon. It was also rated as a medium risk for its ability to persuade humans to change their thinking.

It remains to be seen how such a system might be used by those with bad intentions, such as con artists or hackers. Nevertheless, OpenAI’s evaluation states that medium-risk systems can be released for wider use – a position I believe is misguided.

Strawberry is not one AI “model”, or program, but several – known collectively as o1. These models are intended to answer complex questions and solve intricate maths problems. They are also capable of writing computer code – to help you make your own website or app, for example.

An apparent ability to reason might come as a surprise to some, since this is generally considered a precursor to judgment and decision making – something that has often seemed a distant goal for AI. So, on the surface at least, it would seem to move artificial intelligence a step closer to human-like intelligence.

When things look too good to be true, there’s often a catch. Well, this set of new AI models is designed to maximise their goals. What does this mean in practice? To achieve its desired objective, the path or the strategy chosen by AI may not always necessarily be fair, or align with human values.

True intentions

For example, if you were to play chess against Strawberry, in theory, could its reasoning allow it to hack the scoring system rather than figure out the best strategies for winning the game?

The AI might also be able to lie to humans about its true intentions and capabilities, which would pose a serious safety concern if it were to be deployed widely. For example, if the AI knew it was infected with malware, could it “choose” to conceal this fact in the knowledge that a human operator might opt to disable the whole system if they knew?

AI chatbot icons
Strawberry goes a step beyond the capabilities of AI chatbots. Robert Way / Shutterstock

These would be classic examples of unethical AI behaviour, where cheating or deceiving is acceptable if it leads to a desired goal. It would also be quicker for the AI, as it wouldn’t have to waste any time figuring out the next best move. It may not necessarily be morally correct, however.

This leads to a rather interesting yet worrying discussion. What level of reasoning is Strawberry capable of and what could its unintended consequences be? A powerful AI system that’s capable of cheating humans could pose serious ethical, legal and financial risks to us.

Such risks become grave in critical situations, such as designing weapons of mass destruction. OpenAI rates its own Strawberry models as “medium risk” for their potential to assist scientists in developing chemical, biological, radiological and nuclear weapons.

OpenAI says: “Our evaluations found that o1-preview and o1-mini can help experts with the operational planning of reproducing a known biological threat.” But it goes on to say that experts already have significant expertise in these areas, so the risk would be limited in practice. It adds: “The models do not enable non-experts to create biological threats, because creating such a threat requires hands-on laboratory skills that the models cannot replace.”

Powers of persuasion

OpenAI’s evaluation of Strawberry also investigated the risk that it could persuade humans to change their beliefs. The new o1 models were found to be more persuasive and more manipulative than ChatGPT.

OpenAI also tested a mitigation system that was able to reduce the manipulative capabilities of the AI system. Overall, Strawberry was labelled a medium risk for “persuasion” in Open AI’s tests.

Strawberry was rated low risk for its ability to operate autonomously and on cybersecurity.

Open AI’s policy states that “medium risk” models can be released for wide use. In my view, this underestimates the threat. The deployment of such models could be catastrophic, especially if bad actors manipulate the technology for their own pursuits.

This calls for strong checks and balances that will only be possible through AI regulation and legal frameworks, such as penalising incorrect risk assessments and the misuse of AI.

The UK government stressed the need for “safety, security and robustness” in their 2023 AI white paper, but that’s not nearly enough. There is an urgent need to prioritise human safety and devise rigid scrutiny protocols for AI models such as Strawberry.The Conversation

Shweta Singh, Assistant Professor, Information Systems and Management, Warwick Business School, University of Warwick

This article is republished from The Conversation under a Creative Commons license. Read the original article.