CRISPR’s Wild First Decade Only Scratches the Surface of Its Potential

Ten years ago, a little-known bacterial defense mechanism skyrocketed to fame as a powerful genome editor. In the decade since, CRISPR-Cas9 has spun off multiple variants, expanding into a comprehensive toolbox that can edit the genetic code of life.

Far from an ivory tower pursuit, its practical uses in research, healthcare, and agriculture came fast and furious.

You’ve seen the headlines. The FDA greenlit clinical trials tackling the underlying genetic mutation behind sickle cell disease. Some researchers edited immune cells to fight untreatable blood cancers in children. Others took pig-to-human organ transplants from dream to reality in an attempt to alleviate the shortage of donor organs. Recent work aims to help millions of people with high cholesterol—and potentially bring CRISPR-based gene therapy to the masses—by lowering their chances of heart disease with a single injection.

But to Dr. Jennifer Doudna, who won the Nobel Prize in 2020 for her role in developing CRISPR, we’re just scratching the surface of its potential. Together with graduate student Joy Wang, Doudna laid out a roadmap for the technology’s next decade in an article in Science.

If the 2010s were focused on establishing the CRISPR toolbox and proving its effectiveness, this decade is when the technology reaches its full potential. From CRISPR-based therapies and large-scale screens for disease diagnostics to engineering high-yield crops and nutritious foods, the technology “and its potential impact are still in their early stages,” the authors wrote.

A Decade of Highlights

We’ve spilt plenty of ink on CRISPR advances, but it pays to revisit the past to predict the future—and potentially scout out problems along the way.

One early highlight was CRISPR’s incredible ability to rapidly engineer animal models of disease. Its original form easily snips away a targeted gene in a very early embryo, which when transplanted into a womb can generate genetically modified mice in just a month, compared to a year using previous methods. Additional CRISPR versions, such as base editing—swapping one genetic letter for another—and prime editing—which snips the DNA without cutting both strands—further boosted the toolkit’s flexibility at engineering genetically altered organoids (think mini-brains) and animals. CRISPR rapidly established dozens of models for some of our most devastating and perplexing diseases, including various cancers, Alzheimer’s, and Duchenne muscular dystrophy—a degenerative disorder in which the muscle slowly wastes away. Dozens of CRISPR-based trials are now in the works.

CRISPR also accelerated genetic screening into the big data age. Rather than targeting one gene at a time, it’s now possible to silence, or activate, thousands of genes in parallel, forming a sort of Rosetta stone for translating genetic perturbations into biological changes. This is especially important for understanding genetic interactions, such as those in cancer or aging that we weren’t previously privy to, and gaining new ammunition for drug development.
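To make the screening idea concrete, here is a minimal sketch of how a pooled CRISPR screen is typically read out: guide RNA abundances are sequenced before and after selection, and guides that drop out (or surge) point to genes with a fitness effect. The gene names and counts below are invented for illustration.

```python
import numpy as np

# Hypothetical read counts per guide RNA: (before selection, after selection).
guides = {
    "GENE_A_sg1": (1000, 150), "GENE_A_sg2": (900, 120),   # strong dropout
    "GENE_B_sg1": (800, 790),  "GENE_B_sg2": (1100, 1050), # little change
}

for guide, (before, after) in guides.items():
    # Log2 fold-change in abundance; the pseudocount avoids log(0).
    lfc = np.log2((after + 1) / (before + 1))
    print(f"{guide}: log2 fold-change = {lfc:+.2f}")

# Real pipelines also normalize for sequencing depth and aggregate
# guide-level scores into per-gene statistics (e.g., with MAGeCK).
```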

But a crowning achievement for CRISPR was multiplexed editing. Like simultaneously tapping on multiple piano keys, this type of genetic engineering targets multiple specific DNA areas, rapidly changing a genome’s genetic makeup in one go.

The technology works in plants and animals. For eons, people have painstakingly bred crops with desirable features—be it color, size, taste, nutrition, or disease resilience. CRISPR can help select for multiple traits or even domesticate new crops in just one generation. CRISPR-generated hornless bulls, nutrient-rich tomatoes, and hyper-muscular farm animals and fish are already a reality. With the world population hitting 8 billion in 2022 and millions suffering from hunger, CRISPRed crops may lend a lifeline—that is, if people are willing to accept the technology.

The Path Forward

Where do we go from here?

To the authors, we need to further boost CRISPR’s effectiveness and build trust. This means going back to the basics to increase the tool’s editing accuracy and precision. Here, platforms to rapidly evolve Cas enzymes, the “scissor” component of the CRISPR machinery, are critical.

There have already been successes: one Cas version, for example, acts as a guardrail for the targeting component—the sgRNA “bloodhound.” In classic CRISPR, the sgRNA works alone, but in this updated version, it struggles to bind without Cas assistance. This trick helps tailor the edit to a specific DNA site and increases accuracy so the cut works as predicted.

Similar strategies can also boost precision with fewer side effects or insert new genes in cells such as neurons and others that no longer divide. While this is already possible with prime editing, its efficiency can be 30 times lower than that of classic CRISPR mechanisms.

“A main goal for prime editing in the next decade is improving efficiency without compromising editing product purity—an outcome that has the potential to turn prime editing into one of the most versatile tools for precision editing,” the authors said.

But perhaps more important is delivery, which remains a bottleneck, especially for therapeutics. Currently, CRISPR is generally used on cells outside the body that are infused back—as in the case of CAR-T—or in some cases, tethered to a viral carrier or encapsulated in fatty bubbles and injected into the body. There have been successes: in 2021, the first CRISPR-based shot injected directly into the body showed promising early trial results against the genetic disease transthyretin amyloidosis.

Yet both strategies are problematic: few cell types survive the process of being edited outside the body and reinfused, as in CAR-T, and targeting specific tissues and organs remains mostly out of reach for injectable therapies.

A key advance for the next decade, the authors said, is to shuttle the CRISPR cargo into the targeted tissue without harm and release the gene editor at its intended spot. Each of these steps, though seemingly simple on paper, presents its own set of challenges that will require both bioengineering and innovation to overcome.

Finally, CRISPR can synergize with other technological advances, the authors said. For example, by tapping into cell imaging and machine learning, we could soon engineer even more efficient genome editors. Thanks to faster and cheaper DNA sequencing, we can then easily monitor gene-editing consequences. These data can then provide a kind of feedback mechanism with which to engineer even more powerful genome editors in a virtuous loop.

Real-World Impact

Although further expanding the CRISPR toolbox is on the agenda, the technology is sufficiently mature to impact the real world in its second decade, the authors said.

In the near future, we should see “an increased number of CRISPR-based treatments moving to later stages of clinical trials.” Looking further ahead, the technology, or its variants, could make pig-to-human organ xenotransplants routine, rather than experimental. Large-scale screens for genes that lead to aging or degenerative brain or heart diseases—our top killers today—could yield prophylactic CRISPR-based treatments. It’s no easy task: we need both knowledge of the genetics underlying multifaceted genetic diseases—that is, when multiple genes come into play—and a way to deliver the editing tools to their target. “But the potential benefits may drive innovation in these areas well beyond what is possible today,” the authors said.

Yet with greater power comes greater responsibility. CRISPR has advanced at breakneck speed, and regulatory agencies and the public are still struggling to catch up. Perhaps the most notorious example was that of the CRISPR babies, where experiments carried out against global ethical guidelines prompted an international consortium to lay down a red line for human germ-cell editing.

Similarly, genetically modified organisms (GMOs) remain a controversial topic. Although CRISPR is far more precise than previous genetic tools, it’ll be up to consumers to decide whether to welcome a new generation of human-evolved foods—both plant and animal.

These are important conversations that need global discourse as CRISPR enters its second decade. But to the authors, the future looks bright.

“Just as during the advent of CRISPR genome editing, a combination of scientific curiosity and the desire to benefit society will drive the next decade of innovation in CRISPR technology,” they said. “By continuing to explore the natural world, we will discover what cannot be imagined and put it to real-world use for the benefit of the planet.”

Image Credit: NIH

Electric Vehicle Batteries Could Meet Grid-Scale Storage Needs by 2030

Boosting the role of renewables in our electricity supply will require a massive increase in grid-scale energy storage. But new research suggests that electric vehicle batteries could meet short-term storage demands by as soon as 2030.

While solar and wind are rapidly becoming the cheapest source of electricity in many parts of the world, their intermittency is a significant problem. One potential solution is to use batteries to store energy for times when the sun doesn’t shine and the wind doesn’t blow, but building enough capacity to serve entire power grids would be enormously costly.

That’s why people have suggested making use of the huge number of batteries being installed in the ever-growing global fleet of electric vehicles. The idea is that when they’re not on the road, utilities could use these batteries to store excess energy and draw from it when demand spikes.

While there have been some early pilots, so far it has been unclear whether the idea really has legs. Now, a new economic analysis led by researchers at Leiden University in the Netherlands suggests that electric vehicle batteries could play a major role in grid-scale storage in the relatively near future.

There are two main ways that these batteries could aid the renewables transition, according to the team’s study published in Nature Communications. Firstly, so-called vehicle-to-grid technology could make it possible to do smart vehicle charging, only charging cars when power demand is low. It could also make it possible for vehicle owners to temporarily store electricity for utilities for a price.

But old car batteries could also make a significant contribution. Their capacity declines over repeated charge and discharge cycles, and batteries typically become unsuitable for use in electric vehicles by the time they drop to 70 to 80 percent of their original capacity. That’s because they can no longer hold enough power to make up for their added weight. Weight isn’t a problem for grid-scale storage though, so these car batteries can be repurposed.

The researchers note that the lithium-ion batteries used in cars are probably only suitable for short-term storage of under four hours, but this accounts for most of the projected demand. So far though, there hasn’t been a comprehensive study of how large a contribution both current and retired electric vehicle batteries could play in the future of the grid.

To try and fill that gap, the researchers combined data on how many batteries are estimated to be produced over the coming years, how quickly batteries will degrade based on local conditions, and how electric vehicles are likely to be used in different countries—for instance, how many miles people drive in a day and how often they charge.

They found that the total available storage capacity from these two sources by 2050 was likely to be between 32 and 62 terawatt-hours. The authors note that this is significantly higher than the 3.4 to 19.2 terawatt-hours the world is predicted to need by 2050, according to the International Renewable Energy Agency and research group Storage Lab.

However, not every electric vehicle owner is likely to participate in vehicle-to-grid schemes and not all batteries will get repurposed at the end of their lives. So the researchers investigated how different participation rates would impact the ability of electric vehicle batteries to contribute to grid storage.

They found that to meet global demand by 2050, only between 12 and 43 percent of vehicle owners would need to take part in vehicle-to-grid schemes. If only half of secondhand batteries are used for grid storage, the required participation rates would drop to just 10 percent. In the most optimistic scenarios, electric vehicle batteries could meet demand by 2030.
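The headline numbers are easy to sanity-check with back-of-envelope arithmetic using the ranges quoted above. This is only an envelope check; the exact participation figures depend on modeling details (vehicle-to-grid versus second-life reuse) that the paper handles separately.

```python
# Projected 2050 short-term storage need vs. available EV battery capacity,
# both in terawatt-hours, taken from the ranges quoted above.
demand_low, demand_high = 3.4, 19.2
supply_low, supply_high = 32.0, 62.0

# Fraction of total fleet capacity that must participate to cover demand.
best_case = demand_low / supply_high    # low demand, high supply
worst_case = demand_high / supply_low   # high demand, low supply
print(f"required participation: {best_case:.0%} to {worst_case:.0%}")
# -> roughly 5% to 60%; the study's 12-43% range sits inside this envelope.
```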

Lots of factors will impact whether or not this could ever be achieved, including things like how quickly vehicle-to-grid infrastructure can be rolled out, how easy it is to convince vehicle owners to take part, and the economics of recycling car batteries at the end of their lives. The authors note that governments can and should play a role in incentivizing participation and mandating the reuse of old batteries.

But either way, the results suggest there may be a promising alternative to a costly and time-consuming rollout of dedicated grid storage. Electric vehicle owners may soon be doing their part for the environment twice over.

Image Credit: Shutterstock.com/Roman Zaiets

Google Scrambles to Catch Up in the Wake of OpenAI’s ChatGPT

Google is one of the biggest companies on Earth. Google’s search engine is the front door to the internet. And according to recent reports, Google is scrambling.

Late last year, OpenAI, an artificial intelligence company at the forefront of the field, released ChatGPT. Alongside Elon Musk’s Twitter acquisition and fallout from FTX’s crypto implosion, breathless chatter about ChatGPT and generative AI has been ubiquitous.

The chatbot, which was born from an upgrade to OpenAI’s GPT-3 algorithm, is like a futuristic Q&A machine. Ask any question, and it responds in plain language. Sometimes it gets the facts straight. Sometimes not so much. Still, ChatGPT took the world by storm thanks to the fluidity of its prose, its simple interface, and a mainstream launch.

When a new technology hits public consciousness, people try to sort out its impact. Between debates about how bots like ChatGPT will impact everything from academics to journalism, not a few folks have suggested ChatGPT may end Google’s reign in search. Who wants to hunt down information fragmented across a list of web pages when you could get a coherent, seemingly authoritative, answer in an instant?

In December, The New York Times reported Google was taking the prospect seriously, with management declaring a “code red” internally. This week, as Google announced layoffs, CEO Sundar Pichai told employees the company will sharpen its focus on AI. The NYT also reported Google founders Larry Page and Sergey Brin are now involved in efforts to streamline development of AI products. The worry is that they’ve lost a step to the competition.

If true, it isn’t due to a lack of ability or vision. Google’s no slouch at AI.

The technology here—a flavor of deep learning model called a transformer—was developed at Google in 2017. The company already has its own versions of all the flashy generative AI models, from images (Imagen) to text (LaMDA). Indeed, in 2021, Google researchers published a paper pondering how large language models (like ChatGPT) might radically upend search in the future.

“What if we got rid of the notion of the index altogether and replaced it with a pre-trained model that efficiently and effectively encodes all of the information contained in the corpus?” Donald Metzler, a Google researcher, and coauthors wrote at the time. “What if the distinction between retrieval and ranking went away and instead there was a single response generation phase?” This should sound familiar.
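For readers curious what a transformer actually computes, its core is the “attention” operation: every token scores its relevance to every other token, and those scores weight a blended, context-aware representation. Here is a toy numpy sketch; the dimensions and random weights are arbitrary, for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))                 # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))

Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
attn = softmax(Q @ K.T / np.sqrt(K.shape[-1]))   # how much each token attends to the others
output = attn @ V                                # context-aware token representations
print(attn.round(2))                             # each row sums to 1
```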

Whereas smaller organizations opened access to their algorithms more aggressively, however, Google largely kept its work under wraps. Offering only small, tightly controlled demos to limited groups of people, it deemed the tech too risky and error-prone for wider release just yet. Damage to its brand and reputation was a chief concern.

Now, sweating it out under the bright lights of ChatGPT, the company is planning to release some 20 AI-powered products later this year, according to the NYT. These will encompass all the top generative AI applications, like image, text, and code generation—and they’ll test a ChatGPT-like bot in search.

But is the technology ready to go from splashy demo tested by millions to a crucial tool trusted by billions? In their 2021 paper, the Google researchers suggested an ideal chatbot search assistant would be authoritative, transparent, unbiased, accessible, and contain diverse perspectives. Acing each of those categories is still a stretch for even the most advanced large language models.

Trust matters with search in particular. When it serves up a list of web pages today, Google can blame content creators for poor quality and vow to serve better results in the future. With an AI chatbot, it is the content creator.

As Fast Company’s Harry McCracken pointed out not long ago, if ChatGPT can’t get its facts straight, nothing else matters. “Whenever I chat with ChatGPT about any subject I know much about, such as the history of animation, I’m most struck by how deeply untrustworthy it is,” McCracken wrote. “If a rogue software engineer set out to poison our shared corpus of knowledge by generating convincing-sounding misinformation in bulk, the end result might look something like this.”

Google is clearly aware of the risk. And whatever implementation in search it unveils this year, it still aims to prioritize “getting the facts right, ensuring safety, and getting rid of misinformation.” How it will accomplish these goals is an open question. Just in terms of “ensuring safety,” for example, Google’s algorithms underperform OpenAI’s on metrics of toxicity, according to the NYT. But a Time investigation this week reported that OpenAI had to turn, at least in part, to human workers in Kenya, paid a pittance, to flag and scrub the most toxic data from ChatGPT.

Other questions, including about the copyright of works used to train generative algorithms, remain similarly unresolved. Two copyright lawsuits, one by Getty Images and one by a group of artists, were filed earlier this week.

Still, the competitive landscape, it seems, is compelling Google, Microsoft—which has invested big in OpenAI and is already incorporating its algorithms into products—and others to go full steam ahead in an effort to minimize the risk of being left behind. We’ll have to wait and see what an implementation in search looks like. Maybe it’ll be in beta with a disclaimer for a while, or maybe, as the year progresses, the tech will again surprise us with breakthroughs.

In either case, while generative AI will play a role in search, how much of a role and how soon is less settled. As to whether Google loses its perch? OpenAI’s CEO, Sam Altman, pushed back against the hype this week.

“I think whenever someone talks about a technology being the end of some other giant company, it’s usually wrong,” Altman said in response to a question about the likelihood ChatGPT dethrones Google. “I think people forget they get to make a countermove here, and they’re like pretty smart, pretty competent. I do think there’s a change for search that will probably come at some point—but not as dramatically as people think in the short term.”

Image Credit: D21_Gallery / Unsplash

This Week’s Awesome Tech Stories From Around the Web (Through January 21)

ARTIFICIAL INTELLIGENCE

What Happens When AI Has Read Everything?
Ross Andersen | The Atlantic
“Artificial intelligence has in recent years proved itself to be a quick study, although it is being educated in a manner that would shame the most brutal headmaster. Locked into airtight Borgesian libraries for months with no bathroom breaks or sleep, AIs are told not to emerge until they’ve finished a self-paced speed course in human culture. On the syllabus: a decent fraction of all the surviving text that we have ever produced.”

GENE EDITING

Next Up for CRISPR: Gene Editing for the Masses?
Jessica Hamzelou | MIT Technology Review
“We know the basics of healthy living by now. A balanced diet, regular exercise, and stress reduction can help us avoid heart disease—the world’s biggest killer. But what if you could take a vaccine, too? And not a typical vaccine—one shot that would alter your DNA to provide lifelong protection? That vision is not far off, researchers say. Advances in gene editing, and CRISPR technology in particular, may soon make it possible.”

ETHICS

OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic
Billy Perrigo | Time
“ChatGPT’s creator, OpenAI, is now reportedly in talks with investors to raise funds at a $29 billion valuation, including a potential $10 billion investment by Microsoft. That would make OpenAI, which was founded in San Francisco in 2015 with the aim of building superintelligent machines, one of the world’s most valuable AI companies. But the success story is not one of Silicon Valley genius alone. In its quest to make ChatGPT less toxic, OpenAI used outsourced Kenyan laborers earning less than $2 per hour, a TIME investigation has found.”

ROBOTICS

Boston Dynamics’ Atlas Robot Grows a Set of Hands, Attempts Construction Work
Ron Amadeo | Ars Technica
“Atlas isn’t just clumsily picking things up and carrying them, though. It’s running, jumping, and spinning while carrying heavy objects. At one point it jumps and throws the heavy toolbox up to its construction partner, all without losing balance. It’s doing all this on rickety scaffolding and improvised plank walkways, too, so the ground is constantly moving under Atlas’ feet with every step. Picking up stuff is the start of teaching the robot to do actual work, and it looks right at home on a rough-and-tumble construction site.”

BIOTECH

These Scientists Used CRISPR to Put an Alligator Gene Into Catfish
Jessica Hamzelou | MIT Technology Review
“Millions of fish are farmed in the US every year, but many of them die from infections. In theory, genetically engineering fish with genes that protect them from disease could reduce waste and help limit the environmental impact of fish farming. A team of scientists have attempted to do just that—by inserting an alligator gene into the genomes of catfish.”

3D PRINTING

Can 3D Printing Help Solve the Housing Crisis?
Rachel Monroe | The New Yorker
“Until last year, Icon, one of the biggest and best-funded companies in the field, had printed fewer than two dozen houses, most of them essentially test cases. But, when I met Ballard, the company had recently announced a partnership with Lennar, the second-largest home-builder in the United States, to print a hundred houses in a development outside Austin. A lot was riding on the project, which would be a test of whether the technology was ready for the mainstream.”

FUTURE

1923 Cartoon Eerily Predicted 2023’s AI Art Generators
Benj Edwards | Ars Technica
“[The vintage cartoon] depicts a cartoonist standing by his drawing table and making plans for social events while an ‘idea dynamo’ generates ideas and a ‘cartoon dynamo’ renders the artwork. Interestingly, this separation of labor feels similar to our neural networks of today. In the actual 2023, the ‘idea dynamo’ would likely be a large language model like GPT-3 (albeit imperfectly), and the ‘cartoon dynamo’ is most similar to an image-synthesis model like Stable Diffusion.”

TECH

OpenAI CEO Sam Altman on GPT-4: ‘People Are Begging to Be Disappointed and They Will Be’
James Vincent | The Verge
“GPT-3 came out in 2020, and an improved version, GPT 3.5, was used to create ChatGPT. The launch of GPT-4 is much anticipated, with more excitable members of the AI community and Silicon Valley world already declaring it to be a huge leap forward. …‘The GPT-4 rumor mill is a ridiculous thing. I don’t know where it all comes from,’ said the OpenAI CEO. ‘People are begging to be disappointed and they will be. The hype is just like… We don’t have an actual AGI and that’s sort of what’s expected of us.’”

COMPUTING

Are We Living in a Computer Simulation, and Can We Hack It?
Dennis Overbye | The New York Times
“If you could change the laws of nature, what would you change? Maybe it’s that pesky speed-of-light limit on cosmic travel—not to mention war, pestilence and the eventual asteroid that has Earth’s name on it. Maybe you would like the ability to go back in time—to tell your teenage self how to deal with your parents, or to buy Google stock. Couldn’t the universe use a few improvements?”

Image Credit: Victor Crespo / Unsplash

Affordable Cultured Meat Is a Step Closer With New Approval

In 2020, California-based Good Meat became the first company in the world to start selling lab-grown meat. Its cultured chicken has been on the market in Singapore since then, and though it’s still awaiting FDA approval to sell its products in the US, this week the company reached another milestone when it received approval to sell serum-free meat in Singapore.

The approval was granted by the Singapore Food Agency, and means Good Meat is allowed to use synthetic processes to create its products.

Cultured meat is grown from animal cells and is biologically the same as meat that comes from an animal. The process starts with harvesting muscle cells from an animal, then feeding those cells a mixture of nutrients and naturally-occurring growth factors (or, as Good Meat’s process specifies, amino acids, fats, and vitamins) so that they multiply, differentiate, then grow to form muscle tissue, in much the same way muscle grows inside animals’ bodies.

Usually, getting animal cells to duplicate requires serum. One of the most common is fetal bovine serum, which is made from the blood of fetuses extracted from cows during slaughter. It sounds a bit brutal even for the non-squeamish carnivore. Figuring out how to replicate the serum’s effects with synthetic ingredients has been one of the biggest hurdles to making cultured meat viable.

“Our research and development team worked diligently to replace serum with other nutrients that provide the same functionality, and their hard work over several years paid off,” said Andrew Noyes, head of communications at Good Meat’s parent company, Eat Just. The approval should allow for greater scalability, lower manufacturing costs, and a more sustainable product.

The company is in the process of building a demonstration plant in Singapore that will house a 6,000-liter bioreactor, which it says will be the largest in the industry to date and will have the capacity to make tens of thousands of pounds of meat per year.
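That claim is easy to sanity-check. Assuming a harvest density of roughly 30 grams of cell mass per liter and a batch turnaround of about a week (both assumptions for illustration, not company figures), the announced volume does land in the tens of thousands of pounds per year:

```python
# Back-of-envelope yield estimate for the announced 6,000-liter bioreactor.
volume_l = 6_000             # announced bioreactor volume
harvest_g_per_l = 30         # assumed wet-cell harvest density per batch
batches_per_year = 50        # assumed ~1-week batch turnaround

kg_per_year = volume_l * harvest_g_per_l * batches_per_year / 1_000
print(f"~{kg_per_year:,.0f} kg/year (~{kg_per_year * 2.2:,.0f} lb/year)")
# -> ~9,000 kg, roughly 20,000 lb per year, consistent with the claim.
```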

The serum-free approval “complements the company’s work in Singapore to build and operate its bioreactor facility, where over 50 research scientists and engineers will develop innovative capabilities in the cultivated meat space such as media optimization, process development, and texturization of cultivated meat products,” said Damian Chan, executive vice president of the Singapore Economic Development Board.

It won’t be the only plant of its type. Israeli company Believer Meats opened a facility to produce lab-grown meat at scale in Israel in 2021, and last month started construction of a 200,000-square-foot factory in Wilson, North Carolina.

This past November a third player in the industry, Upside Foods, became the first company to receive a No Questions Letter from the FDA, essentially an approval saying its lab-grown chicken is safe for consumers to eat (though two additional approvals are still needed before the company can actually start selling the product).

The timing of the cultured meat industry’s advancement is convenient, though not coincidental; more consumers are becoming conscious of factory farming’s negative environmental impact, and they’re looking for eco-friendly alternatives. Cultured meat will allow them to eat real meat (as opposed to plant-based “meat”) with a far smaller environmental impact and no animals harmed to boot.

It remains to be seen whether scaling production will go as smoothly as Good Meat and its competitors are hoping, as well as how long it will take for the products to reach price parity with regular meat. But if the industry’s recent streak of clearing hurdles continues, lab-grown meat may soon be found in restaurants and on grocery shelves.

Image Credit: Good Meat

Astronomers Reveal the Most Detailed Radio Image Yet of the Milky Way’s Galactic Plane

Two major astronomy research programs, called EMU and PEGASUS, have joined forces to resolve one of the mysteries of our Milky Way: where are all the supernova remnants?

A supernova remnant is an expanding cloud of gas and dust marking the last phase in the life of a star, after it has exploded as a supernova. But the number of supernova remnants we have detected so far with radio telescopes is too low. Models predict five times as many, so where are the missing ones?

We have combined observations from two of Australia’s world-leading radio telescopes, the ASKAP radio telescope and the Parkes radio telescope, Murriyang, to answer this question.

The Gas Between the Stars

The new image reveals thin tendrils and clumpy clouds associated with hydrogen gas filling the space between the stars. We can see sites where new stars are forming, as well as supernova remnants.

In just this small patch, only about 1 percent of the whole Milky Way, we have discovered more than 20 new possible supernova remnants where only 7 were previously known.

These discoveries were led by PhD student Brianna Ball from Canada’s University of Alberta, working with her supervisor, Roland Kothes of the National Research Council of Canada, who prepared the image. These new discoveries suggest we are close to accounting for the missing remnants.

So why can we see them now when we couldn’t before?

The Power of Joining Forces

I lead the Evolutionary Map of the Universe or EMU program, an ambitious project with ASKAP to make the best radio atlas of the southern hemisphere.

EMU will measure about 40 million new distant galaxies and supermassive black holes to help us understand how galaxies have changed over the history of the universe.

Early EMU data have already led to the discovery of odd radio circles (or “ORCs”), and revealed rare oddities like the “Dancing Ghosts.”

For any telescope, the resolution of its images depends on the size of its aperture. Interferometers like ASKAP simulate the aperture of a much larger telescope. With 36 relatively small dishes (each 12m in diameter) but a 6km distance connecting the farthest of these, ASKAP mimics a single telescope with a 6km wide dish.
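The diffraction formula θ ≈ 1.22 λ/D puts numbers on this. At the 21-centimeter hydrogen line (an assumed observing wavelength; ASKAP covers roughly 0.7 to 1.8 GHz), the synthetic 6 km aperture sharpens the view by nearly two orders of magnitude over a single 64 m dish:

```python
import numpy as np

wavelength_m = 0.21  # ~1.4 GHz, the hydrogen-line band (assumed)
for name, aperture_m in [("Parkes, 64 m dish", 64), ("ASKAP, 6 km baseline", 6_000)]:
    theta_rad = 1.22 * wavelength_m / aperture_m   # diffraction-limited resolution
    arcsec = np.degrees(theta_rad) * 3600
    print(f"{name}: ~{arcsec:,.0f} arcsec")
# Parkes: ~800 arcsec (~14 arcmin); ASKAP: ~9 arcsec.
```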

That gives ASKAP a good resolution, but comes at the expense of missing radio emission on the largest scales. In side-by-side comparisons, the ASKAP image alone appears too skeletal.

To recover that missing information, we turned to a companion project called PEGASUS, led by Ettore Carretti of Italy’s National Institute of Astrophysics.

PEGASUS uses the 64m diameter Parkes/Murriyang telescope (one of the largest single-dish radio telescopes in the world) to map the sky.

Even with such a large dish, Parkes has rather limited resolution. By combining the information from both Parkes and ASKAP, each fills in the gaps of the other to give us the highest-fidelity image of this region of our Milky Way galaxy. This combination reveals the radio emission on all scales to help uncover the missing supernova remnants.
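In radio astronomy this combination is often done by “feathering” the two images together in the Fourier domain: large spatial scales come from the single dish, small scales from the interferometer. Below is a simplified numpy sketch of the idea only; real pipelines, such as CASA’s feather task, also match beam shapes and flux scales.

```python
import numpy as np

def feather(single_dish, interferometer, sigma_px=8.0):
    """Blend two co-aligned images: low spatial frequencies from the
    single dish, high spatial frequencies from the interferometer."""
    fd_low = np.fft.fft2(single_dish)
    fd_high = np.fft.fft2(interferometer)

    ny, nx = single_dish.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    # Gaussian low-pass weight; its complement keeps the fine detail.
    w_low = np.exp(-2 * (np.pi * sigma_px) ** 2 * (fx**2 + fy**2))

    combined = w_low * fd_low + (1 - w_low) * fd_high
    return np.fft.ifft2(combined).real
```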

Linking the datasets from EMU and PEGASUS will allow us to reveal more hidden gems. In the next few years we will have an unprecedented view of almost the entire Milky Way, about a hundred times larger than this initial image, but with the same level of detail and sensitivity.

We estimate there may be up to 1,500 or more new supernova remnants yet to discover. Solving the puzzle of these missing remnants will open new windows into the history of our Milky Way.


ASKAP and Parkes are owned and operated by CSIRO, Australia’s national science agency, as part of the Australia Telescope National Facility. CSIRO acknowledges the Wajarri Yamaji people as the Traditional Owners and native title holders of Inyarrimanha Ilgari Bundara, the CSIRO Murchison Radio-astronomy Observatory, where ASKAP is located, and the Wiradjuri people as the traditional owners of the Parkes Observatory.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: R. Kothes (NRC) and the PEGASUS team

The Next Step for AI in Biology Is to Predict How Proteins Behave in the Body

Proteins are often called the building blocks of life.

While true, the analogy evokes images of Lego-like pieces snapping together to form intricate but rigid blocks that combine into muscles and other tissues. In reality, proteins are more like flexible tumbleweeds—highly sophisticated structures with “spikes” and branches protruding from a central frame—that morph and change with their environment.

This shapeshifting controls the biological processes of living things—for example, opening the protein tunnels dotted along neurons or driving cancerous growth. But it also makes understanding protein behavior and developing drugs that interact with proteins a challenge.

While recent AI breakthroughs in the prediction (and even generation) of protein structures are a huge advance 50 years in the making, they still only offer snapshots of proteins. To capture whole biological processes—and identify which lead to diseases—we need predictions of protein structures in multiple “poses” and, more importantly, how each of these poses changes a cell’s inner functions. And if we’re to rely on AI to solve the challenge, we need more data.

Thanks to a new protein atlas published this month in Nature, we now have a great start.

A collaboration between MIT, Harvard Medical School, Yale School of Medicine, and Weill Cornell Medical College, the study focused on a specific chemical change in proteins—called phosphorylation—that’s known to act as a protein on-off switch, and in many cases, lead to or inhibit cancer.

The atlas will help scientists dig into how signaling goes awry in tumors. But to Sean Humphrey and Elise Needham, doctors at the Royal Children’s Hospital and the University of Cambridge, respectively, who were not involved in the work, the atlas may also begin to help turn static AI predictions of protein shapes into more fluid predictions of how proteins behave in the body.

Let’s Talk About PTMs (Huh?)

After they’re manufactured, the surfaces of proteins are “dotted” with small chemical groups—like adding toppings to an ice cream cone. These toppings either enhance or turn off the protein’s activity. In other cases, parts of the protein get chopped off to activate it. Protein tags in neurons drive brain development; other tags plant red flags on proteins ready for disposal.

All these tweaks are called post-translational modifications (PTMs).

PTMs essentially transform proteins into biological microprocessors. They’re an efficient way for the cell to regulate its inner workings without needing to alter its DNA or epigenetic makeup. PTMs often dramatically change the structure and function of proteins, and in some cases, they could contribute to Alzheimer’s, cancer, stroke, and diabetes.

For Elisa Fadda at Maynooth University in Ireland and Jon Agirre at the University of York, it’s high time we incorporated PTMs into AI protein predictors like AlphaFold. While AlphaFold is changing the way we do structural biology, they said, “the algorithm does not account for essential modifications that affect protein structure and function, which gives us only part of the picture.”

The King PTM

So, what kinds of PTMs should we first incorporate into an AI?

Let me introduce you to phosphorylation. This PTM adds a chemical group, phosphate, to specific locations on proteins. It’s a “regulatory mechanism that is fundamental to life,” said Humphrey and Needham.

The protein hotspots for phosphorylation are well-known: two amino acids, serine and threonine. Roughly 99 percent of all phosphorylation sites are due to the duo, and previous studies have identified roughly 100,000 potential spots. The problem is identifying which proteins—dubbed kinases, of which there are hundreds—add the chemical groups to which hotspots.

In the new study, the team first screened over 300 kinases that specifically grab onto over 100 targets. Each target is a short string of amino acids containing serine and threonine, the “bulls-eye” for phosphorylation, and surrounded with different amino acids. The goal was to see how effective each kinase is at its job at every target—almost like a kinase matchmaking game.

This allowed the team to find the most preferred motif—sequence of amino acids—for each kinase. Surprisingly, “almost two-thirds of phosphorylation sites could be assigned to one of a small handful of kinases,” said Humphrey and Needham.
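In practice, each kinase’s preferences can be summarized as a position-specific scoring matrix: how much each amino acid at each position around the phosphoacceptor helps or hurts recognition. Here is a toy Python sketch of scoring a candidate site this way; the preference values and peptide are invented, while the atlas derives real ones from the peptide screen.

```python
# Invented log-preference scores for one hypothetical kinase, keyed by
# position relative to the phospho-serine at offset 0.
preferences = {
    -3: {"R": 2.0, "K": 1.5},   # favors basic residues upstream
    -2: {"R": 1.2},
    +1: {"P": -1.0},            # disfavors proline right after the site
}

def score_site(peptide, center):
    """Sum positional preference scores around the phosphoacceptor."""
    total = 0.0
    for offset, prefs in preferences.items():
        total += prefs.get(peptide[center + offset], 0.0)
    return total

# 'S' at index 5; arginines at -3 and -2 make this a strong match (3.2).
print(score_site("ALRRASLG", center=5))
```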

A Rosetta Stone

Based on their findings, the team grouped the kinases into 38 different motif-based classes, each with an appetite for a particular protein target. In theory, the kinases can catalyze over 90,000 known phosphorylation sites in proteins.

“This atlas of kinase motifs now lets us decode signaling networks,” said study co-author Michael Yaffe of MIT.

In a proof-of-concept test, the team used the atlas to hunt down cellular signals that differ between healthy cells and those exposed to radiation. The test found 37 potential phosphorylation targets of a single kinase, most of which were previously unknown.

Ok, so what?

The study’s method can be used to track down other PTMs to begin building a comprehensive atlas of the cellular signals and networks that drive our basic biological functions.

The dataset, when fed into AlphaFold, RoseTTAFold, their variants, or other emerging protein structure prediction algorithms, could help them better predict how proteins dynamically change shape and interact in cells. This would be far more useful for drug discovery than today’s static protein snapshots. Scientists may also be able to use such tools to tackle the kinase “dark universe.” This subset of more than 100 kinases has no discernible protein targets. In other words—we have no idea how these powerful proteins work inside the body.

“This possibility should motivate researchers to venture ‘into the dark’, to better characterize these elusive proteins,” said Humphrey and Needham.

The team acknowledges there’s a long road ahead, but they hope their atlas and methodology can influence others to build new databases. In the end, “we hope our comprehensive motif-based approach will be uniquely equipped to unravel the complex signaling that underlies human disease progressions, mechanisms of cancer drug resistance, dietary interventions and other important physiological processes,” they said.

Image Credit: DeepMind

DARPA Wants to Develop a Drug to Make People Resistant to Extreme Cold

From painkillers to antihistamines to caffeine and beyond, we’ve found many ways to get our bodies to tolerate uncomfortable circumstances, for better and for worse. Now DARPA wants to add another to the list: getting the human body to better tolerate extreme cold.

The idea doesn’t sound like a great one at first glance; our bodies aren’t made to live in the cold, nor even withstand it for more than a little while. Our teeth start to chatter, we shiver, and eventually lose feeling in extremities, all signals that we need to get ourselves warm, stat—otherwise we can get hypothermia, frostbite, or worse.

The Defense Advanced Research Projects Agency (DARPA) has a few different motives for this research, but the primary one shouldn’t be surprising (though it’s still a bit creepy, in my opinion): enabling soldiers to be comfortable in cold places for long periods of time. The technology, if successful, could also be used to help explorers or adventurers (at high altitudes where it’s cold or in places like Alaska or the Arctic, for example) better tolerate cold, or to treat hypothermia patients.

Last week, Rice University in Houston announced that one of its assistant professors of bioengineering, Jerzy Szablowski, received a Young Faculty Award from DARPA to research non-genetic drugs that can “temporarily enhance the human body’s resilience to extreme cold exposure.”

Thermogenesis is the use of energy to create heat, and our bodies have two different ways of doing this. One is shivering, which we’re all familiar with. The other, which Szablowski simply calls non-shivering thermogenesis, involves burning off brown adipose tissue (BAT), or brown fat.

This type of fat exists specifically to warm us up when we get cold; it stores energy and only activates in cold temperatures. Most of our body fat is white fat. It builds up when we ingest more calories than we burn and stores those calories for when we don’t get enough energy from food. An unfortunate majority of American adults have the opposite problem: too much white fat, which increases the risk of conditions like heart disease and Type 2 diabetes.

While white fat is made of fatty acids called lipids, brown fat is dense in mitochondria (the component of cells where energy production occurs). When we get cold our bodies start pumping out the hormone norepinephrine, which attaches to receptors on brown fat cells, signaling the mitochondria to create energy—and warming us up in the process.

Szablowski will be trying to find ways to boost the BAT response. “If you have a drug that makes brown fat more active, then instead of having to spend weeks and weeks adapting to cold, you can perform better within hours,” he said. He added that his research will focus on finding a site to intervene in the BAT response, “like a protein or a process in the cell that you can target with a drug.”

Is it possible to change the body’s normal BAT response without needing to burn through more brown fat, which healthy adults don’t have a ton of to spare? We’ll see. Though white fat and brown fat have different compositions, it’s possible that Szablowski’s research could lead to new ways to eliminate white fat and treat obesity as well.

Image Credit: StockSnap / Pixabay

Cellular Reprogramming Extends Lifespan in Mice, Longevity Startup Says

Billions of dollars are pouring into longevity startups as a growing body of research shows that aging might not be as inevitable as we assumed. Now, a startup claims to have reached a major milestone by extending the lifespans of healthy mice using a promising approach called cellular reprogramming.

In 2017, scientists at the Salk Institute for Biological Studies in San Diego first showed that it was possible to rejuvenate the cells of mice by resetting their epigenetic markers, chemical modifications to the DNA that don’t alter the underlying genetic code but can regulate the activity of certain genes. These changes have long been suspected of playing a crucial role in the aging process.

The researchers discovered that the approach could increase the lifespan of the mice by as much as 30 percent and significantly rejuvenate some of their tissues, but the experiments were done on animals with the mouse-equivalent of progeria, a disease that causes accelerated aging in humans.

It was unclear whether this kind of life extension would translate to normal healthy mice, but now preliminary results from a longevity startup called Rejuvenate Bio suggest that it does. A non-peer-reviewed paper published to the preprint server bioRxiv claims that the approach can double the remaining lifespan of elderly mice.

“While aging cannot currently be prevented, its impact on life and healthspan can potentially be minimized by interventions that aim to return gene expression networks to optimal function,” Noah Davidsohn, chief scientific officer and co-founder of Rejuvenate Bio, said in a press release. “The study results suggest that partial reprogramming could be a potential treatment in the elderly for reversing age-associated diseases and could extend human lifespan.”

Cellular reprogramming builds on the Nobel Prize-winning work of Shinya Yamanaka, who showed that adult cells could be transformed back into stem cells by exposing them to a specific set of genome-regulating proteins known as transcription factors. The Salk team’s innovation was to reduce the exposure times to the so-called Yamanaka factors, which they found could reverse epigenetic changes to the cells without reverting them to stem cells.

While the approach led to clear increases in lifespan in prematurely aging mice, the fact that no one had been able to replicate the result in healthy mice since then raised doubts about the approach. “Different groups have tried this experiment, and the data have not been positive so far,” Alejandro Ocampo, from the University of Lausanne in Switzerland, who carried out the original Salk experiments, told MIT Technology Review.

But now, Rejuvenate Bio claims that when they exposed healthy mice near the end of their lives to a subset of the Yamanaka factors, they lived for another 18 weeks on average, compared to just 9 weeks for those that didn’t undergo cellular reprogramming.

The mice were already 124 weeks old at the time, so this only represents a 7 percent increase in lifespan. But the company says it’s still a significant demonstration of the potential life-extending powers of cellular reprogramming, and the treated mice also showed improvements in a range of health metrics.
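The two percentages describe the same result from different baselines, as a quick calculation shows:

```python
# Doubled remaining lifespan vs. ~7% whole-life gain: same data, two baselines.
age_at_treatment = 124   # weeks old when dosing began
remaining_control = 9    # further weeks lived by untreated mice
remaining_treated = 18   # further weeks lived by treated mice

print(f"remaining lifespan: x{remaining_treated / remaining_control:.1f}")  # x2.0
gain = (age_at_treatment + remaining_treated) / (age_at_treatment + remaining_control) - 1
print(f"whole-life gain: {gain:.1%}")                                       # ~6.8%
```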

Another reason why the research is interesting, though, is the method by which the Yamanaka factors were administered. Previous studies have generally relied on genetically modifying mice to produce the factors themselves, but this study delivered them to the animals’ cells using repurposed viruses, which is the approach used in clinically approved gene therapies.

The results have yet to be peer reviewed, so should be taken with a pinch of salt until other groups are able to replicate them, but there is growing evidence of the potential therapeutic benefits of cellular reprogramming. Recent research on mice has shown that it can boost liver regeneration and help restore sight in animals with glaucoma.

It’s likely to be a long road to human trials, as there are significant question marks about potential side effects, including concerns that the approach could increase the risk of cancer. But promisingly, recent research on mice from the Salk team has shown that long-term treatment with Yamanaka factors led to significant rejuvenating effects on the animals’ tissues without causing any cancers. The research also found that the longer the treatment time, the better the results.

Further work will need to be done to validate the research from Rejuvenate Bio, but the results suggest that age-reversing treatments may soon be within reach.

Image Credit: Alexa / Pixabay

In Bioethics, the Public Deserves More Than a Seat at the Table

Every time scientists present a groundbreaking biological innovation, it seems as though there is a crescendo of noise—articles beckoning for public discussion, social media posts sharing the public’s opinions, scientists urging for more public input about bioethical decisions. The noise grows and grows and then—silence.

In August 2022, two research groups published papers in Nature and Cell that demonstrated scientists’ newfound ability to create synthetic mouse embryos in the laboratory until 8.5 days post-fertilization—no egg cells, sperm cells, or wombs needed. The outcry was immediate: If this can be done with mice, are humans next?

Scientists were quick to ease the public’s worries: It’s not yet possible to create synthetic human embryos. Yet their response was concerning. Why did we need to wait until such a scientific advance occurred before we could discuss its implications? How can we have important discussions about bioethical issues—issues at the intersection of ethics and biological research—that already impact society?

Typically, when such challenging bioethical dilemmas arise, scientists and ethicists will discuss the potential implications on committees and in forums, and will often provide policy recommendations. But unfortunately, public input is not always sought—or is sought in a limited capacity. And whether their opinions make any difference to policy is an open question.

We should all have the right to not only partake in bioethical discussions—but to partake in them in an effective and impactful manner. Otherwise, we’ll go to sleep one day, wake up the next morning, and realize we live in a world that we had no hand in creating.

When it came to the mouse embryos, some scientists discussed the need for public input when making complex and controversial bioethical decisions, echoing a longstanding refrain. But creating avenues for public discussion and deliberation about bioethical issues can be difficult.

Designing public discussion opportunities is time consuming and requires the expertise of a wide variety of professionals. Meanwhile, barriers exist in the form of scientists and policymakers who believe that the public can’t meaningfully contribute to scientific discourse due to a lack of understanding.

Even if that were the case, it’s not a reason to exclude people who would be affected by such decisions. Institutions must extend the effort to both inform the public and allow them to express their opinion.

There are some initiatives that promote public deliberation, such as Harvard Medical School’s public bioethics forums, which bring together stakeholders to discuss important bioethical topics. Providing such spaces is an important first step, as it effectively opens a seat at the table. Healthy deliberation—one which allows people to hold conflicting viewpoints and actively discuss their beliefs rather than simply consume information—is critical for making bioethics a more inclusive and democratic space.

But public input doesn’t ultimately count for much if such discussions don’t exert any actual influence on policymaking. Despite their role in fostering educated discussions, initiatives such as Harvard’s do not allow citizens to contribute to new policy decisions.

Historically, there have been some attempts to do so. Since the 1970s, many countries, including the US, have implemented public deliberation as a part of bioethical decision-making, to varying degrees of success. In some instances, such as with the 1974 National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, public opinion was considered and some of the commission’s final reports were heavily influential in policy. But again, it’s questionable how much input the public truly had. Their input was sought solely through public hearings. Bioethicists and policymakers comprised the commission and created the final reports.

Fortunately, more recently, there have been public deliberation efforts that provide citizens with an opportunity to influence policymaking decisions. For instance, the Citizens’ Reference Panel on Health Technologies in Ontario, Canada made a small yet critical impact on governmental decision-making. This panel was created to allow Ontarians to inform how regulatory bodies assess five health technologies. The one technology the panel had the most profound effect on was screening methods for colorectal cancers and polyps. While widespread screening has many benefits, citizens expressed some concerns about the loss of patient autonomy when screening was performed automatically without patient input. This point was added to a final recommendation document created by the Ontario Health Technology Advisory Committee, and committee members have since said that the point would have gone unnoticed had it not been for the panel.

Another example comes from Buckinghamshire in England, where a citizens’ jury expressed their opinions about how to tackle back pain, a major health problem for the county’s citizens. In this context, a citizens’ jury is a two- to five-day event where a few dozen members of the general public come together to discuss an issue and ultimately produce a recommendation document. The Buckinghamshire Health Authority, or BHA, promised that they would take the jury’s recommendations into account, and they did. The BHA then formed a project team to implement these recommendations.

This raises the question: What makes certain public deliberation efforts successful and others not?

If success is defined as a near-direct impact on policy decisions, a common theme emerges: Citizens’ panels and juries that are connected to a governmental organization tend to be more impactful policy-wise, particularly in the short term.

In both previous examples, the government was involved to varying degrees, and—perhaps more importantly—the public’s recommendations were actually prioritized. As Susan Goold, an ethicist and professor at the University of Michigan, put it in an interview with Undark, policymakers should never say “see you later” after a deliberative session.

In Buckinghamshire, as part of an agreement with the King’s Fund—a health improvements charity that was supporting this public deliberation effort—the BHA was required to follow the panel’s recommendations. If they chose not to, they had to state specific reasons. This ensured accountability and the implementation of the recommendations.

Another critical aspect of successful public deliberation efforts is appropriate organization. Julia Abelson, lead of the Public Engagement in Health Policy Project and a professor at McMaster University, explained that there are examples of government-initiated public deliberation that have had little impact as well as efforts not directly linked to the government that were very impactful.

The differentiating factor is thoughtful planning and organization. For instance, it’s critical that, during the design phase of the process, organizers set clear goals and objectives they’d like to meet by the end of deliberation.

Additionally, organizers should carefully consider how information is presented to participants. How questions are framed, for example, can affect whether new ideas emerge from participants. Another important component organizers need to consider is how discussions are moderated. For instance, are the facilitators actively shaping the discussion or solely preventing one participant from dominating the conversation?

Though some research has been done on this topic, many questions remain. What researchers know is that all of the elements above must come together to create a successful citizens’ panel that can impact policy down the line.

There is no question that public input is immensely valuable whether we’re discussing gene editing or the creation of synthetic embryos. Thankfully, the increase in the number of deliberation efforts reflects that. However, public deliberation is a tool, and like all tools, it requires a guiding hand.

We must ensure that governments are involved in deliberation efforts when necessary and that citizens’ panels are designed thoughtfully. We must do this so one day, when we go to sleep and wake up the next morning, we’ll see the sun rising on a world we’ve built together.

This article was originally published on Undark. Read the original article.

Image Credit: Furiosa-L / Pixabay
