Teaching kids to code with cultural research and embroidery machines

Caption: University of Washington researchers taught a group of high schoolers to code by combining cultural research into various embroidery traditions with “computational embroidery.” The method teaches kids to encode embroidery patterns on a computer through a coding language called Turtlestitch. Here, a student stitched plants with code, then hand-embroidered a bee. Credit: Kivuva et al./SIGCSE

Textiles and computing are more closely linked than most of us realize. It was a surprise (to me, anyway) to learn that the Jacquard loom was influential in the development of the computer (see this June 25, 2019 essay “Programming patterns: the story of the Jacquard loom” on the Science and Industry Museum in Manchester [UK] website). As for embroidery, that too has an historical link to computing (see my May 22, 2023 posting “Ada Lovelace’s skills (embroidery, languages, and more) led to her pioneering computer work in the 19th century“).

The latest embroidery link to computing was announced in a March 14, 2024 news item on phys.org, Note: A link has been removed,

Even in tech-heavy Washington state, the numbers of students with access to computer science classes aren’t higher than national averages: In the 2022–2023 school year, 48% of public high schools offered foundational CS [computer science] classes and 5% of middle school and high school students took such classes.

Those numbers have inched up, but historically marginalized populations are still less likely to attend schools teaching computer science, and certain groups—such as Latinx students and young women—are less likely than their peers to be enrolled in the classes even if the school offers them.

To reach a greater diversity of grade-school students, University of Washington researchers have taught a group of high schoolers to code by combining cultural research into various embroidery traditions—such as Mexican, Arab and Japanese—with “computational embroidery.” The method lets users encode embroidery patterns on a computer through an open-source coding language called Turtlestitch, in which they fit visual blocks together. An electronic embroidery machine then stitches the patterns into fabric.

A March 14, 2024 University of Washington news release (also on EurekAlert), which originated the news item, describes the research in more detail, Note: Links have been removed,

“We’ve come a long way as a country in offering some computer science courses in schools,” said co-lead author F. Megumi Kivuva, a UW doctoral student in the Information School. “But we’re learning that access doesn’t necessarily mean equity. It doesn’t mean underrepresented minority groups are always getting the opportunity to learn. And sometimes all it means is that if there’s one 20-student CS class, all 3,000 students at the school count as having ‘access.’ [emphases mine] Our computational embroidery class was really a way to engage diverse groups of students and show that their identities have a place in the classroom.”

In designing the course, the researchers aimed to make coding accessible to a demographically diverse group of 12 students. To make space for them to explore their curiosities, the team used a method called “co-construction” where the students had a say each week in what they learned and how they’d be assessed.

“We wanted to dispel the myth that a coder is someone sitting in a corner, not being very social, typing on their computer,” Kivuva said.

Before delving into Turtlestitch, students spent a week exploring cultural traditions in embroidery — whether ones connected to their own cultures or ones they were simply curious about. For one student, bringing his identity into the work meant taking inspiration from his Mexican heritage; for another, it meant embroidering an image of bubble tea, her favorite drink; a third stitched a corgi.

Students also spent a week learning to embroider by hand. The craft is an easy fit for coding because both rely on structures of repetition. But embroidery is tactile, so students were able to see their code move from the screen into the physical world. They were also able to augment what they coded with hand stitching, letting them distinguish what the human and the machine were good at. For instance, one student decided to code the design for a flower, then add a bee by hand.
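Turtlestitch itself is a block-based language built on Snap!, so students snap visual blocks together rather than typing code. Purely as an illustration of the shared idea — a repeated motif traced by turtle-style movements, where each point becomes a needle drop — here is a rough Python sketch; the function name and parameters are invented for illustration and are not part of Turtlestitch:

```python
import math

def flower_stitches(petals=6, petal_length=20.0, steps_per_petal=8):
    """Generate (x, y) stitch coordinates for a simple radial 'flower' motif.

    Mimics the turtle-graphics idea behind computational embroidery:
    one repeated block (a petal) rotated around a centre point. Each
    returned point is where the embroidery machine would drop the needle.
    """
    stitches = [(0.0, 0.0)]  # start at the centre
    for p in range(petals):
        angle = 2 * math.pi * p / petals  # rotate the repeated block
        for s in range(1, steps_per_petal + 1):
            r = petal_length * s / steps_per_petal  # step out along the petal
            stitches.append((r * math.cos(angle), r * math.sin(angle)))
        stitches.append((0.0, 0.0))  # return to centre before the next petal
    return stitches

points = flower_stitches()
print(len(points))  # 1 centre point + 6 petals x (8 steps + 1 return) = 55
```

The repetition structure (a loop over petals, a loop over stitches) is exactly the kind of pattern the students could then augment by hand, as with the coded flower and hand-stitched bee.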

“There’s a long history of overlooking crafts that have traditionally been perceived as feminized,” said co-lead author Jayne Everson, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “So combining this overlooked art that is deeply technical with computing was really fun, because I don’t see computing as more or less technical than embroidery.”

The class ran for six weeks over the summer, and researchers were impressed by the interest it elicited. In fact, one of the main drawbacks researchers found was that six weeks felt too short, given the curiosity the students showed. Since the technology is affordable — the embroidery machine is $400 and the software is free — Kivuva plans to tailor the course to be approachable for refugee students from kindergarten through fifth grade. Pleased with the high student engagement, Kivuva and Everson will also run a workshop on their method at the Computer Science Teachers Association [CSTA] conference this summer.

“I was constantly blown away by the way students were engaging when they were given freedom. Some were staying after class to keep working,” said Everson. “I come from a math and science teaching background. To get students to stick around after class is kind of like, ‘Alright, we’ve done it. That’s all I want.’”

Additional co-authors on the paper were Camilo Montes De Haro, a UW undergraduate researcher in the iSchool, and Amy J. Ko, a UW professor in the iSchool. This research was funded by the National Science Foundation, Microsoft, Adobe and Google.

I wanted to know a little more about equity and access and found this in the introduction to the paper (link to and citation for the paper follow or there’s the PDF of the paper),

Efforts to broaden participation in computing at the K-12 level have led to an increasing number of schools (53%) offering CS; however, participation is low. Code.org reports that 6% of high school, 3.9% of middle school, and 7.3% of primary school students are enrolled [4]. Furthermore, historically marginalized populations are also underrepresented in K-12 CS [4, 9]. Prior work suggests that there are systemic barriers like sexism, racism, and classism that lead to inequities in primary and secondary computing education [9].

Here’s a link to and a citation for the paper,

Cultural-Centric Computational Embroidery by F. Megumi Kivuva, Jayne Everson, Camilo Montes De Haro, and Amy J. Ko. SIGCSE 2024: Proceedings of the 55th ACM [Association for Computing Machinery] Technical Symposium on Computer Science Education, V. 1, March 2024, Pages 673–679 DOI: https://doi.org/10.1145/3626252.3630818 Published: 07 March 2024

This paper is open access.

The Computer Science Teachers Association (CSTA) 2024 conference mentioned in the news release is being held in Las Vegas, Nevada, July 16–19, 2024.

Exploring biodiversity beyond boundaries and participatory (citizen) science

The two terms are often used interchangeably, which I've found confusing. After some investigation, I believe 'participatory sciences' is the broader classification (subject) term, which includes 'citizen science' as a specific subset (type) of participatory science.

Bearing that in mind, here’s more from a May 29, 2024 letter/notice received via email about an upcoming participatory sciences conference,

There are so many areas where participatory sciences are creating a better understanding of the world around us. Sometimes looking at just one of those areas can help us see where there is real strength in these practices–and where combined work across this field can inspire huge change.

Right now, biodiversity is on my mind. 

Last week’s International Day of Biological Diversity invited everyone on the planet to be #PartOfThePlan to protect the systems that sustain us. The Biodiversity Plan calls for scientific collaborations, shared commitments, tracking indicators of progress, and developing transparent communication and engagement around actions by the end of this decade.

Participatory science projects have proven–but underutilized–potential to address spatial and temporal gaps in datasets; engage multiple ways of knowing; inform multilateral environmental agreements; and inspire action and change based on improved understandings of the systems that sustain us.

In this field, we have the tools, experience, and vision to rise to this global challenge. What would it take to leverage the full power of participatory sciences to inspire and inform wise decisions for people and the planet?

If you are working in, or interested in, the frontiers of participatory sciences to address global challenges like biodiversity, you can be part of driving strategies and solutions at next week’s action-oriented strand on biodiversity at CAPS 2024 [Conference for Advancing the Participatory Sciences], June 3-6. Woven throughout the virtual four-day event are sessions that will both inform and inspire collaborative problem solving to improve how the participatory sciences are leveraged to confront the biodiversity crisis.

There will be opportunities in the program to share your thoughts and experiences, whether or not you are giving a talk.  This event is designed to bring together a diversity of perspectives from across the Americas and beyond.

The strand is a collaboration between AAPS [Association for Advancing Participatory Sciences], the Red Iberoamericana de Ciencia Participativa (the Iberoamerican Network of Participatory Science), iDigBio [Integrated Digitized Biocollections], and Florida State University’s Institute for Digital Information & Scientific Communication.

CAPS 2024 Biodiversity Elements:

Collaborative Sessions Addressing Biodiversity Knowledge

Each day, multiple sessions will convene global leaders, practitioners, and others to discuss how to advance biodiversity knowledge worldwide. Formats include daily symposia, ideas-to-action conversations, virtual multi-media posters, and lightning talk discussions. Our virtual format provides plenty of opportunities for exchanges. 

Find the full biodiversity strand program here >

Plenary Symposia: Biodiversity Beyond Boundaries

Join global leaders as they share their work to span boundaries to create connected knowledge for biodiversity research and action. 

Learn more about the Plenary Symposia >

Biodiversity-themed Virtual Posters and Live Poster Sessions

Over one-third of the 100+ posters focus specifically on advancing biodiversity-related participatory science. Each day, poster sessions highlight a selection of posters via lightning talks and group discussions.  

Our media-rich virtual poster platform lets you easily scroll through all of the posters and chat with presenters on your own time – even from your phone!

View the full poster presenter list here >

There is still time to register!

Sign up now to ensure a seamless conference experience.

We have tiered registration rates to enable equitable access to the event, and to support delivery of future programming for everyone.

Register Here

This image is from the May 22, 2024 International Day of Biological Diversity,

The unrestricted exploitation of wildlife has led to the disappearance of many animal species at an alarming rate, destroying Earth’s biological diversity and upsetting the ecological balance. Photo: Vladimir Wrangel/Adobe Stock

Graphene-like materials for first smart contact lenses with AR (augmented reality) vision, health monitoring, & content surfing?

A March 6, 2024 XPANCEO news release on EurekAlert (also posted March 11, 2024 on the Graphene Council blog) and distributed by Mindset Consulting announced smart contact lenses devised with graphene-like materials,

XPANCEO, a deep tech company developing the first smart contact lenses with XR vision, health monitoring, and content surfing features, in collaboration with the Nobel laureate Konstantin S. Novoselov (National University of Singapore, University of Manchester) and professor Luis Martin-Moreno (Instituto de Nanociencia y Materiales de Aragon), has announced in Nature Communications a groundbreaking discovery of new properties of rhenium diselenide and rhenium disulfide, enabling a novel mode of light-matter interaction with huge potential for integrated photonics, healthcare, and AR. Rhenium disulfide and rhenium diselenide are layered materials belonging to the family of graphene-like materials. Absorption and refraction in these materials have different principal directions, implying six degrees of freedom instead of a maximum of three in classical materials. As a result, rhenium disulfide and rhenium diselenide by themselves allow controlling the light propagation direction without any of the technological steps required for traditional materials like silicon and titanium dioxide.

Such surprising light-matter interaction in ReS2 and ReSe2 arises from the specific symmetry breaking observed in these materials. Symmetry plays a huge role in nature, human life, and material science. For example, almost all living things are built symmetrically. Therefore, in ancient times symmetry was also called harmony, as it was associated with beauty. Physical laws are also closely related to symmetry, such as the laws of conservation of energy and momentum. Violation of symmetry leads to the appearance of new physical effects and radical changes in the properties of materials. In particular, the water-ice phase transition is a consequence of a decrease in the degree of symmetry. In the case of ReS2 and ReSe2, the crystal lattice has the lowest possible degree of symmetry, which leads to the rotation of the optical axes – the directions of symmetry of the material’s optical properties – something previously observed only for organic materials. As a result, these materials make it possible to control the direction of light by changing the wavelength, which opens a unique way for light manipulation in next-generation devices and applications.

“The discovery of unique properties in anisotropic materials is revolutionizing the fields of nanophotonics and optoelectronics, presenting exciting possibilities. These materials serve as a versatile platform for the advancement of optical devices, such as wavelength-switchable metamaterials, metasurfaces, and waveguides. Among the promising applications is the development of highly efficient biochemical sensors. These sensors have the potential to outperform existing analogs in terms of both sensitivity and cost efficiency. For example, they are anticipated to significantly reduce the expenses associated with hospital blood testing equipment, which is currently quite costly, potentially by several orders of magnitude. This will also allow the detection of dangerous diseases and viruses, such as cancer or COVID, at earlier stages,” says Dr. Valentyn S. Volkov, co-founder and scientific partner at XPANCEO, a scientist with an h-Index of 38 and over 8000 citations in leading international publications.

Beyond the healthcare industry, these novel properties of graphene-like materials can find applications in artificial intelligence and machine learning, facilitating the development of photonic circuits to create a fast and powerful computer suitable for machine learning tasks. A computer based on photonic circuits is a superior solution, transmitting more information per unit of time, and unlike electric currents, photons (light beams) flow across one another without interacting. Furthermore, the new material properties can be utilized in producing smart optics, such as contact lenses or glasses, specifically for advancing AR [augmented reality] features. Leveraging these properties will enhance image coloration and adapt images for individuals with impaired color perception, enabling them to see the full spectrum of colors.

Here’s a link to and a citation for the paper,

Wandering principal optical axes in van der Waals triclinic materials by Georgy A. Ermolaev, Kirill V. Voronin, Adilet N. Toksumakov, Dmitriy V. Grudinin, Ilia M. Fradkin, Arslan Mazitov, Aleksandr S. Slavich, Mikhail K. Tatmyshevskiy, Dmitry I. Yakubovsky, Valentin R. Solovey, Roman V. Kirtaev, Sergey M. Novikov, Elena S. Zhukova, Ivan Kruglov, Andrey A. Vyshnevyy, Denis G. Baranov, Davit A. Ghazaryan, Aleksey V. Arsenin, Luis Martin-Moreno, Valentyn S. Volkov & Kostya S. Novoselov. Nature Communications volume 15, Article number: 1552 (2024) DOI: https://doi.org/10.1038/s41467-024-45266-3 Published: 06 March 2024

This paper is open access.

A kintsugi approach to fusion energy: seeing the beauty (strength) in your flaws

Kintsugi is the Japanese word for a type of repair that is also art; “golden joinery” is the literal meaning of the word. From the Traditional Kyoto “Culture: Kintsugi” webpage,

Caption: An example of kintsugi repair by David Pike. (Photo courtesy of David Pike) [downloaded from https://traditionalkyoto.com/culture/kintsugi/]

A March 5, 2024 news item on phys.org links the art of kintsugi to fusion energy, specifically, managing plasma, Note: Links have been removed,

In the Japanese art of Kintsugi, an artist takes the broken shards of a bowl and fuses them back together with gold to make a final product more beautiful than the original.

That idea is inspiring a new approach to managing plasma, the super-hot state of matter, for use as a power source. Scientists are using the imperfections in magnetic fields that confine a reaction to improve and enhance the plasma in an approach outlined in a paper in the journal Nature Communications.

A March 5, 2024 Princeton Plasma Physics Laboratory (PPPL) news release (also on EurekAlert), which originated the news item, describes the research in more detail, Note: Links have been removed,

“This approach allows you to maintain a high-performance plasma, controlling instabilities in the core and the edge of the plasma simultaneously. That simultaneous control is particularly important and difficult to do. That’s what makes this work special,” said Joseph Snipes of the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL). He is PPPL’s deputy head of the Tokamak Experimental Science Department and was a co-author of the paper.

PPPL Physicist Seong-Moo Yang led the research team, which spans various institutions in the U.S. and South Korea. Yang says this is the first time any research team has validated a systematic approach to tailoring magnetic field imperfections to make the plasma suitable for use as a power source. These magnetic field imperfections are known as error fields. 

“Our novel method identifies optimal error field corrections, enhancing plasma stability,” Yang said. “This method was proven to enhance plasma stability under different plasma conditions, for example, when the plasma was under conditions of high and low magnetic confinement.”

Errors that are hard to correct

Error fields are typically caused by minuscule defects in the magnetic coils of the device that holds the plasma, which is called a tokamak. Until now, error fields were only seen as a nuisance because even a very small error field could cause a plasma disruption that halts fusion reactions and can damage the walls of a fusion vessel. Consequently, fusion researchers have spent considerable time and effort meticulously finding ways to correct error fields.

“It’s quite difficult to eliminate existing error fields, so instead of fixing these coil irregularities, we can apply additional magnetic fields surrounding the fusion vessel in a process known as error field correction,” Yang said. 

In the past, this approach would have also hurt the plasma’s core, making the plasma unsuitable for fusion power generation. This time, the researchers were able to eliminate instabilities at the edge of the plasma and maintain the stability of the core. The research is a prime example of how PPPL researchers are bridging the gap between today’s fusion technology and what will be needed to bring fusion power to the electrical grid. 

“This is actually a very effective way of breaking the symmetry of the system, so humans can intentionally degrade the confinement. It’s like making a very tiny hole in a balloon so that it will not explode,” said SangKyeun Kim, a staff research scientist at PPPL and paper co-author. Just as air would leak out of a small hole in a balloon, a tiny quantity of plasma leaks out of the error field, which helps to maintain its overall stability.

Managing the core and the edge of the plasma simultaneously

One of the toughest parts of managing a fusion reaction is getting both the core and the edge of the plasma to behave at the same time. There are ideal zones for the temperature and density of the plasma in both regions, and hitting those targets while eliminating instabilities is tough.

This study demonstrates that adjusting the error fields can simultaneously stabilize both the core and the edge of the plasma. By carefully controlling the magnetic fields produced by the tokamak’s coils, the researchers could suppress edge instabilities, also known as edge localized modes (ELMs), without causing disruptions or a substantial loss of confinement.

“We are trying to protect the device,” said PPPL Staff Research Physicist Qiming Hu, an author of the paper. 

Extending the research beyond KSTAR

The research was conducted using the KSTAR tokamak in South Korea, which stands out for its ability to adjust its magnetic error field configuration with great flexibility. This capability is crucial for experimenting with different error field configurations to find the most effective ones for stabilizing the plasma.

The researchers say their approach has significant implications for the design of future tokamak fusion pilot plants, potentially making them more efficient and reliable. They are currently working on an artificial intelligence (AI) version of their control system to make it more efficient.

“These models are fairly complex; they take a bit of time to calculate. But when you want to do something in a real-time control system, you can only afford a few milliseconds to do a calculation,” said Snipes. “Using AI, you can basically teach the system what to expect and be able to use that artificial intelligence to predict ahead of time what will be necessary to control the plasma and how to implement it in real-time.”

While their new paper highlights work done using KSTAR’s internal magnetic coils, Hu suggests future research with magnetic coils outside of the fusion vessel would be valuable because the fusion community is moving away from the idea of housing such coils inside the vacuum-sealed vessel due to the potential destruction of such components from the extreme heat of the plasma.

Researchers from the Korea Institute of Fusion Energy (KFE), Columbia University, and Seoul National University were also integral to the project.

The research was supported by: the U.S. Department of Energy under contract number DE-AC02-09CH11466; the Ministry of Science and ICT under the KFE R&D Program “KSTAR Experimental Collaboration and Fusion Plasma Research (KFE-EN2401-15)”; the National Research Foundation (NRF) grant No. RS-2023-00281272 funded through the Korean Ministry of Science, Information and Communication Technology and the New Faculty Startup Fund from Seoul National University; the NRF under grants No. 2019R1F1A1057545 and No. 2022R1F1A1073863; the National R&D Program through the NRF funded by the Ministry of Science & ICT (NRF-2019R1A2C1010757).

Here’s a link to and a citation for the paper,

Tailoring tokamak error fields to control plasma instabilities and transport by SeongMoo Yang, Jong-Kyu Park, YoungMu Jeon, Nikolas C. Logan, Jaehyun Lee, Qiming Hu, JongHa Lee, SangKyeun Kim, Jaewook Kim, Hyungho Lee, Yong-Su Na, Taik Soo Hahm, Gyungjin Choi, Joseph A. Snipes, Gunyoung Park & Won-Ha Ko. Nature Communications volume 15, Article number: 1275 (2024) DOI: https://doi.org/10.1038/s41467-024-45454-1 Published: 10 February 2024

This paper is open access.

Squirrel observations in St. Louis: a story of bias in citizen science data

Squirrels and other members of the family Sciuridae. Credit: Chicoutimi (montage); Karakal; AndiW; National Park Service; en:User:Markus Krötzsch; The Lilac Breasted Roller; Nico Conradie from Centurion, South Africa; Hans Hillewaert; Sylvouille; National Park Service – Own work. From Wikipedia / CC BY 3.0 licence

A March 5, 2024 news item on phys.org introduces a story about squirrels, bias, and citizen science,

When biologist Elizabeth Carlen pulled up in her 2007 Subaru for her first look around St. Louis, she was already checking for the squirrels. Arriving as a newcomer from New York City, Carlen had scrolled through maps and lists of recent sightings in a digital application called iNaturalist. This app is a popular tool for reporting and sharing sightings of animals and plants.

People often start using apps like iNaturalist and eBird when they get interested in a contributory science project (also sometimes called a citizen science project). Armed with cellphones equipped with cameras and GPS, app-wielding volunteers can submit geolocated data that iNaturalist then translates into user-friendly maps. Collectively, these observations have provided scientists and community members greater insight into the biodiversity of their local environment and helped scientists understand trends in climate change, adaptation and species distribution.

But right away, Carlen ran into problems with the iNaturalist data in St. Louis.

A March 5, 2024 Washington University in St. Louis news release (also on EurekAlert) by Talia Ogliore, which originated the news item, describes the bias problem and the research it inspired, Note: Links have been removed,

“According to the app, Eastern gray squirrels tended to be mostly spotted in the south part of the city,” said Carlen, a postdoctoral fellow with the Living Earth Collaborative at Washington University in St. Louis. “That seemed weird to me, especially because the trees, or canopy cover, tended to be pretty even across the city.

“I wondered what was going on. Were there really no squirrels in the northern part of the city?” Carlen said. A cursory drive through a few parks and back alleys north of Delmar Boulevard told her otherwise: squirrels galore.

Carlen took to X, formerly Twitter, for advice. “Squirrels are abundant in the northern part of the city, but there are no recorded observations,” she mused. Carlen asked if others had experienced similar issues with iNaturalist data in their own backyards.

Many people responded, voicing their concerns and affirming Carlen’s experience. The maps on iNaturalist seemed clear, but they did not reflect the way squirrels were actually distributed across St. Louis. Instead, Carlen was looking at biased data.

Previous research has highlighted biases in data reported to contributory science platforms, but little work has articulated how these biases arise.

Carlen reached out to the scientists who responded to her Twitter post to brainstorm some ideas. They put together a framework that illustrates how social and ecological factors combine to create bias in contributory data. In a new paper published in People & Nature, Carlen and her co-authors shared this framework and offered some recommendations to help address the problems.

The scientists described four kinds of “filters” that can bias the reported species pool in contributory science projects:

* Participation filter. Participation reflects who is reporting the data, including where those people are located and the areas they have access to. This filter also may reflect whether individuals in a community are aware of an effort to collect data, or if they have the means and motivation to collect it.

* Detectability filter. An animal’s biology and behavior can impact whether people record it. For example, people are less likely to report sightings of owls or other nocturnal species.

* Sampling filter. People might be more willing to report animals they see when they are recreating (e.g., hanging out in a park), but not what they see while they’re commuting.

* Preference filter. People tend to ignore or filter out pests, nuisance species and uncharismatic or “boring” species. (“There’s not a lot of people photographing rats and putting them on iNaturalist — or pigeons, for that matter,” Carlen said.)

In the paper, Carlen and her team applied their framework to data recorded in St. Louis as a case study. They showed that eBird and iNaturalist observations are concentrated in the southern part of the city, where more white people live. Uneven participation in St. Louis is likely a consequence of variables, such as race, income, and/or contemporary politics, which differ between northern and southern parts of the city, the authors wrote. The other filters of detectability, sampling and preference also likely influence species reporting in St. Louis.
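The participation filter in particular is easy to see in a toy simulation (the numbers below are invented for illustration, not drawn from the paper): even when two halves of a city hold identical squirrel populations, uneven app use alone produces a map that appears to show far more squirrels on one side.

```python
import random

random.seed(42)  # make the illustration reproducible

# Hypothetical setup: squirrels are spread evenly across two halves of a
# city, but observer participation is uneven (the "participation filter").
true_squirrels_north = 500
true_squirrels_south = 500

p_report_north = 0.02  # few active app users in the north
p_report_south = 0.20  # many active app users in the south

# Each squirrel is independently reported with its side's probability.
reported_north = sum(random.random() < p_report_north
                     for _ in range(true_squirrels_north))
reported_south = sum(random.random() < p_report_south
                     for _ in range(true_squirrels_south))

print(reported_north, reported_south)
# The reported counts suggest far more squirrels in the south,
# even though the true populations are identical.
```

A naive map built from the reported sightings would reproduce exactly the pattern Carlen noticed on iNaturalist: apparent squirrel scarcity that reflects who is reporting, not where the animals are.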

Biased and unrepresentative data is not just a problem for urban ecologists, even if they are the ones who are most likely to notice it, Carlen said. City planners, environmental consultants and local nonprofits all sometimes use contributory science data in their work.

“We need to be very conscious about how we’re using this data and how we’re interpreting where animals are,” Carlen said.

Carlen shared several recommendations for researchers and institutions that want to improve contributory science efforts and help reduce bias. Basic steps include considering cultural relevance when designing a project, conducting proactive outreach with diverse stakeholders and translating project materials into multiple languages.

Data and conclusions drawn from contributory projects should be made publicly available, communicated in accessible formats and made relevant to participants and community members.

“It’s important that we work with communities to understand what their needs are — and then build a better partnership,” Carlen said. “We can’t just show residents the app and tell them that they need to use it, because that ignores the underlying problem that our society is still segregated and not everyone has the resources to participate.

“We need to build relationships with the community and understand what they want to know about the wildlife in their neighborhood,” Carlen said. “Then we can design projects that address those questions, provide resources and actively empower community members to contribute to data collection.”

Here’s a link to and a citation for the paper,

A framework for contextualizing social-ecological biases in contributory science data by Elizabeth J. Carlen, Cesar O. Estien, Tal Caspi, Deja Perkins, Benjamin R. Goldstein, Samantha E. S. Kreling, Yasmine Hentati, Tyus D. Williams, Lauren A. Stanton, Simone Des Roches, Rebecca F. Johnson, Alison N. Young, Caren B. Cooper, Christopher J. Schell. People & Nature, Volume 6, Issue 2, April 2024, Pages 377-390 DOI: https://doi.org/10.1002/pan3.10592 First published: 03 March 2024

This paper is open access.

Deriving gold from electronic waste

Caption: The gold nugget obtained from computer motherboards in three parts. The largest of these parts is around five millimetres wide. Credit: ETH Zurich / Alan Kovacevic

A March 1, 2024 ETH Zurich press release (also on EurekAlert but published February 29, 2024) by Fabio Bergamin describes research into reclaiming gold from electronic waste, Note: A link has been removed.

In brief

  • Protein fibril sponges made by ETH Zurich researchers are hugely effective at recovering gold from electronic waste.
  • From 20 old computer motherboards, the researchers retrieved a 22-carat gold nugget weighing 450 milligrams.
  • Because the method utilises various waste and industry byproducts, it is not only sustainable but cost effective as well.

Transforming base materials into gold was one of the elusive goals of the alchemists of yore. Now Professor Raffaele Mezzenga from the Department of Health Sciences and Technology at ETH Zurich has accomplished something in that vein. He has not of course transformed another chemical element into gold, as the alchemists sought to do. But he has managed to recover gold from electronic waste using a byproduct of the cheesemaking process.

Electronic waste contains a variety of valuable metals, including copper, cobalt, and even significant amounts of gold. Recovering this gold from disused smartphones and computers is an attractive proposition in view of the rising demand for the precious metal. However, the recovery methods devised to date are energy-intensive and often require the use of highly toxic chemicals. Now, a group led by ETH Professor Mezzenga has come up with a very efficient, cost-effective, and above all far more sustainable method: with a sponge made from a protein matrix, the researchers have successfully extracted gold from electronic waste.

Selective gold adsorption

To manufacture the sponge, Mohammad Peydayesh, a senior scientist in Mezzenga’s Group, and his colleagues denatured whey proteins under acidic conditions and high temperatures, so that they aggregated into protein nanofibrils in a gel. The scientists then dried the gel, creating a sponge out of these protein fibrils.

To recover gold in the laboratory experiment, the team salvaged the motherboards from 20 old computers and extracted the metal parts. They dissolved these parts in an acid bath so as to ionise the metals.

When they placed the protein fibre sponge in the metal ion solution, the gold ions adhered to the protein fibres. Other metal ions can also adhere to the fibres, but gold ions do so much more efficiently. The researchers demonstrated this in their paper, which they have published in the journal Advanced Materials.

As the next step, the researchers heated the sponge. This reduced the gold ions into flakes, which the scientists subsequently melted down into a gold nugget. In this way, they obtained a nugget of around 450 milligrams out of the 20 computer motherboards. The nugget was 91 percent gold (the remainder being copper), which corresponds to 22 carats.
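For readers curious how 91 percent purity maps onto 22 carats, here is a quick back-of-the-envelope check. The figures come from the press release; the conversion itself is just the standard carat convention (pure gold = 24 carats):

```python
# Carat purity check for the recovered nugget (figures from the press release).
# By convention, carat purity is the gold mass fraction scaled to 24,
# so pure gold is 24 carats.

def carats(gold_fraction: float) -> float:
    """Convert a gold mass fraction (0-1) to carats."""
    return gold_fraction * 24

nugget_mass_mg = 450     # total nugget mass from 20 motherboards
gold_fraction = 0.91     # 91% gold, the remainder copper

print(f"Gold content: {nugget_mass_mg * gold_fraction:.0f} mg")      # ~410 mg
print(f"Purity: {carats(gold_fraction):.1f} carats (rounds to 22 ct)")
```

At 0.91 × 24 = 21.84 carats, the nugget rounds to the 22 carats reported.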

Economically viable

The new technology is commercially viable, as Mezzenga’s calculations show: procurement costs for the source materials added to the energy costs for the entire process are 50 times lower than the value of the gold that can be recovered.
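The 50:1 value-to-cost claim can be put in rough dollar terms. Note that the gold price used below (about US$65 per gram, an early-2024 ballpark) is my assumption for illustration, not a figure from ETH Zurich:

```python
# Rough scale of the 50:1 value-to-cost claim from the press release.
# ASSUMPTION: gold price of ~US$65/g (early-2024 ballpark), not an ETH figure.
GOLD_PRICE_USD_PER_G = 65.0

nugget_gold_g = 0.450 * 0.91          # 450 mg nugget, 91% gold
gold_value = nugget_gold_g * GOLD_PRICE_USD_PER_G
max_process_cost = gold_value / 50    # costs are 50x lower than the gold's value

print(f"Recovered gold value: ~${gold_value:.2f}")
print(f"Implied cost of materials and energy: ~${max_process_cost:.2f}")
```

On these assumptions, the gold from 20 motherboards is worth roughly US$27, against well under a dollar in input costs, which is the scale of margin that makes the process commercially interesting.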

Next, the researchers want to develop the technology to ready it for the market. Although electronic waste is the most promising starting product from which they want to extract gold, there are other possible sources. These include industrial waste from microchip manufacturing or from gold-plating processes. In addition, the scientists plan to investigate whether they can manufacture the protein fibril sponges out of other protein-rich byproducts or waste products from the food industry.

“The fact I love the most is that we’re using a food industry byproduct to obtain gold from electronic waste,” Mezzenga says. In a very real sense, he observes, the method transforms two waste products into gold. “You can’t get much more sustainable than that!”

If you have a problem accessing either of the two previously provided links to the press release, you can try this February 29, 2024 news item on ScienceDaily.

Here’s a link to and a citation for the paper,

Gold Recovery from E-Waste by Food-Waste Amyloid Aerogels by Mohammad Peydayesh, Enrico Boschi, Felix Donat, Raffaele Mezzenga. Advanced Materials DOI: https://doi.org/10.1002/adma.202310642 First published online: 23 January 2024

This paper is open access.

Corporate venture capital (CVC) and the nanotechnology market plus 2023’s top 10 countries’ nanotechnology patents

I have two brief nanotechnology commercialization stories from the same publication.

Corporate venture capital (CVC) and the nano market

From a March 23, 2024 article on statnano.com, Note: Links have been removed,

Nanotechnology’s enormous potential across various sectors has long attracted the eye of investors, keen to capitalise on its commercial potency.

Yet the initial propulsion provided by traditional venture capital avenues was reined back when the reality of long development timelines, regulatory hurdles, and difficulty in translating scientific advances into commercially viable products became apparent.

While the initial flurry of activity declined in the early part of the 21st century, a new kid on the investing block has proved an enticing option beyond traditional funding methods.

Corporate venture capital has, over the last 10 years, emerged as a key plank in turning ideas into commercial reality.

Simply put, corporate venture capital (CVC) has seen large corporations, recognising the strategic value of nanotechnology, establish their own VC arms to invest in promising start-ups.

The likes of Samsung, Johnson & Johnson and BASF have all sought to get an edge on their competition by sinking money into start-ups in nano and other technologies, which could deliver benefits to them in the long term.

Unlike traditional VC firms, CVCs invest with a strategic lens, aligning their investments with their core business goals. For instance, BASF’s venture capital arm, BASF Venture Capital, focuses on nanomaterials with applications in coatings, chemicals, and construction.

It has an evergreen EUR 250 million fund available and will consider everything from seed to Series B investment opportunities.

Samsung Ventures takes a similar approach, explaining: “Our major investment areas are in semiconductors, telecommunication, software, internet, bioengineering and the medical industry from start-ups to established companies that are about to be listed on the stock market.”

While historically concentrated in North America and Europe, CVC activity in nanotechnology is expanding to Asia, with China being a major player.

China has, perhaps not surprisingly, seen considerable growth over the last decade in nano and few will bet against it being the primary driver of innovation over the next 10 years.

As ever, the long development cycles of emerging nano breakthroughs can frequently deter some CVCs with shorter investment horizons.

2023 Nanotechnology patent applications: which countries top the list?

A March 28, 2024 article from statnano.com provides interesting data concerning patent applications,

In 2023, a total of 18,526 nanotechnology patent applications were published at the United States Patent and Trademark Office (USPTO) and the European Patent Office (EPO). The United States accounted for approximately 40% of these nanotechnology patent publications, followed by China, South Korea, and Japan in the next positions.

According to a statistical analysis conducted by StatNano using data from the Orbit database, the USPTO published 84% of the 18,526 nanotechnology patent applications in 2023, which is more than five times the number published by the EPO. However, the EPO saw a nearly 17% increase in nanotechnology patent publications compared to the previous year, while the USPTO’s growth was around 4%.
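The article’s headline figures are easy to cross-check. A minimal sketch, using only the numbers quoted above:

```python
# Sanity check of the 2023 patent-office split reported by StatNano.
total = 18_526                 # nanotech patent applications, USPTO + EPO combined
uspto = round(total * 0.84)    # the USPTO published 84% of the total
epo = total - uspto

print(f"USPTO: {uspto:,}, EPO: {epo:,}")
print(f"USPTO/EPO ratio: {uspto / epo:.2f}")  # "more than five times" checks out
```

This gives roughly 15,562 USPTO publications against 2,964 at the EPO, a ratio of about 5.25, consistent with the article’s “more than five times.”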

Nanotechnology patents are defined, based on the ISO/TS 18110 standard, as those having at least one claim related to nanotechnology, or patents classified with an IPC classification code related to nanotechnology, such as B82.

From the March 28, 2024 article,

Top 10 Countries Based on Published Patent Applications in the Field of Nanotechnology in USPTO in 2023

| Rank¹ | Country | Nanotechnology patent applications published in USPTO | Nanotechnology patent applications published in EPO | Growth rate in USPTO | Growth rate in EPO |
|---|---|---|---|---|---|
| 1 | United States | 6,926 | 492 | 3.20% | 17.40% |
| 2 | South Korea | 1,715 | 476 | 13.40% | 8.40% |
| 3 | China | 1,627 | 569 | 4.20% | 47.40% |
| 4 | Taiwan | 1,118 | 61 | 5.00% | -12.90% |
| 5 | Japan | 1,113 | 445 | -1.20% | 9.30% |
| 6 | Germany | 484 | 229 | -10.20% | 15.70% |
| 7 | England | 331 | 50 | 5.10% | 16.30% |
| 8 | France | 323 | 145 | -8.00% | 17.90% |
| 9 | Canada | 290 | 12 | 5.10% | -14.30% |
| 10 | Saudi Arabia | 268 | 3 | 22.40% | 0.00% |

1- Ranking based on the number of nanotechnology patent applications at the USPTO
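The “approximately 40%” US share quoted earlier follows directly from the United States row of the table:

```python
# Deriving the US share of 2023 nanotech patent publications from the table's
# first row (6,926 at the USPTO plus 492 at the EPO) and the 18,526 total.
us_uspto, us_epo = 6_926, 492
total_2023 = 18_526

share = (us_uspto + us_epo) / total_2023
print(f"US share of 2023 nanotech patent publications: {share:.1%}")  # ~40.0%
```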

If you have a bit of time and interest, I suggest reading the March 28, 2024 article in its entirety.