Bitter Almonds: A Taste of Cyanide

Cyanide is perhaps one of the most notorious poisons in the world. It comes in many different forms, and has been a popular method of dispatch throughout history. Cyanide is famously implicated in many real-life murders and suicides, and continues to be a popular trope in fictional writing due to its deadly reputation and universal infamy.

A Natural Killer

Cyanide is found in a number of plants and stone fruits, in the form of a compound called amygdalin. When amygdalin is broken down in the digestive system, enzymes in the gut convert it into hydrogen cyanide. Amygdalin is present in peach stones, apple seeds, and of course almonds. When it is said that cyanide tastes and smells of bitter almonds, it is actually the other way around; bitter almonds taste of cyanide due to their high concentration of amygdalin.

However, this is not to suggest that any of these plants are inherently dangerous to consume. One would have to eat around 1000 apple seeds in a very short space of time in order to ingest a lethal dose of cyanide. Indeed, in the case of the murder of Sarah Hart in 1845, it was suggested that the cyanide had been ingested accidentally through the victim’s consumption of apples. The jury, of course, saw through this rather feeble defence and the accused, John Tawell, was found guilty.
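
To put the figure of around 1000 seeds in context, here is a rough back-of-envelope calculation. The amygdalin content per seed, the fraction converted to hydrogen cyanide, and the lethal dose used below are all assumed, rounded values chosen purely for illustration, not toxicological reference data; on these assumptions the answer comes out in the high hundreds, broadly consistent with the figure above.

```python
# Back-of-envelope estimate of how many apple seeds would be needed to
# deliver a lethal dose of cyanide. All figures are rough assumptions
# chosen for illustration, not toxicological reference values.

AMYGDALIN_PER_SEED_MG = 2.0      # assumed amygdalin in one thoroughly chewed seed
HCN_FRACTION = 27.0 / 457.0      # one HCN (27 g/mol) released per amygdalin (~457 g/mol)
LETHAL_DOSE_MG_PER_KG = 1.5      # assumed lethal oral dose of hydrogen cyanide
BODY_WEIGHT_KG = 70.0            # assumed adult body weight

hcn_per_seed_mg = AMYGDALIN_PER_SEED_MG * HCN_FRACTION
lethal_dose_mg = LETHAL_DOSE_MG_PER_KG * BODY_WEIGHT_KG
seeds_needed = lethal_dose_mg / hcn_per_seed_mg

print(f"HCN per seed:   {hcn_per_seed_mg:.2f} mg")
print(f"Lethal dose:    {lethal_dose_mg:.0f} mg")
print(f"Seeds required: {seeds_needed:.0f}")   # roughly 900 on these assumptions
```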

Dangerous Compounds

Cyanide is often found in the form of compound salts, normally of potassium or sodium. Suicide capsules given to spies in the early twentieth century usually contained potassium cyanide, as it was quickly absorbed and thus ensured a speedy death. Potassium cyanide was also once a commonly used insecticide, mixed with water to produce highly toxic hydrogen cyanide gas. This process was infamously perfected by the Nazis during WWII, when they used a cyanide-based pesticide called Zyklon B to murder around a million people in concentration camps throughout Europe.

Zyklon B canisters in Auschwitz

Sniffing Out an Antidote

Cyanide is particularly deadly because it acts so quickly on the body. Once hydrogen cyanide is absorbed into the bloodstream, it rapidly attacks cells and prevents them from using oxygen; the victim essentially suffocates. Most victims die within an hour of ingesting a lethal dose, and sometimes within a few minutes. Symptoms include anxiety, confusion, nausea, and headaches, which are quickly followed by seizures and respiratory failure.

The first known antidote to cyanide was amyl nitrite, now better known recreationally as ‘poppers’. When inhaled, amyl nitrite provides a different substance for the cyanide to bind to, allowing the body to process and excrete it. Many other modern antidotes work in a similar way, but often come with serious side effects and complications. Even today 95% of cyanide poisonings, both accidental and deliberate, are fatal.

Sources and Further Reading

Harkup, K., A is for Arsenic: The Poisons of Agatha Christie, (London, 2016)

https://wellcomecollection.org/works/q353b7hr/items?canvas=9&langCode=eng&sierraId=b22462533

https://www.sciencedirect.com/topics/pharmacology-toxicology-and-pharmaceutical-science/cyanide

Mortal Remains: The Art of Embalming

The art of embalming is believed to have been started by ancient civilisations, most notably the Ancient Egyptians, who are perhaps most famous for their mummification techniques. However, the desire to preserve the body has endured over the millennia, and many different techniques have been used to try to forestall the decay of the deceased.

Medieval Methods

Until quite recently, Western civilisations saw little need to preserve the earthly body. Papal decree forbade most dismemberment of the body, and funerary preparations were mostly limited to washing and clothing the deceased. However, many famous figures did have at least part of their body preserved. For example, the famous Scottish warrior Robert the Bruce requested that his heart be removed and preserved so that it could be taken on crusade by his friend James Douglas. Medieval methods of preservation were somewhat cruder than the intricate procedures developed later; Bruce’s heart was submerged in hot tar, essentially cooking it. It was carried in a canister around Douglas’s neck for several years before eventually being returned to Scotland for burial.

Sometimes, preserving the body was a practical need rather than a romantic desire. This was often the case if an important figure died unexpectedly. Prior to the Renaissance, if a body needed to be kept from decomposing, it was generally submerged in alcohol. There are many accounts of this taking place, especially for those who had died at sea. Perhaps the best known of these is Admiral Nelson, whose body was placed in a barrel of brandy after he was killed at the Battle of Trafalgar.

Modern Embalming

Where it was available, bodies were sometimes put on ice or special cooling mats. However, these were only viable for short periods of time and were generally unsuitable for transporting bodies long distances. With the advent of railways, people began to request burial in places much further afield from where they lived or died. It therefore became necessary to find methods of preservation that were more reliable than old techniques.

Before formaldehyde became the embalming fluid of choice, arsenic was often used to preserve bodies. During the 1830s, arsenic was commonly used to preserve cadavers for medical students. Its use accelerated with the advent of the American Civil War, when unprecedented numbers of soldiers had to be transported hundreds of miles for burial. The practice was popularised by the embalming of Abraham Lincoln, whose body was meticulously preserved so that a grand funeral procession lasting three weeks could be carried out.

A syringe set used for embalming, dating from around 1790

Embalming Today

Today, it is not often necessary to preserve human bodies for long periods of time unless they are being donated to medical science. Modern refrigeration means that bodies can be kept from decaying without the need for invasive techniques, and embalming is usually only performed as part of funeral preparations when requested by the family. Embalming is much more common in the US, where open casket funerals followed by burial are far more popular than cremation. The opposite is true in the UK, where the large majority of people opt for cremation, though ‘green burials’ are now growing in popularity.

Modern embalmers carry out the process by injecting the body with a formaldehyde solution (known as formalin) which stops the body decaying. Plastic caps are inserted under the eyelids before they are glued shut, and the jaw is stitched closed. Cosmetic work is then done to give the body a more restful appearance for viewing. 

Sources and Further Reading

https://www.britannica.com/topic/embalming

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3931544/

https://www.researchgate.net/publication/338116727_Embalming_-History_to_its_Recent_Advancements

Whorton, J. C., The Arsenic Century, (Oxford, 2010)

Asbestos: From Convenience to Cancerous

Asbestos is a naturally occurring mineral made up of fibrous crystals. It is highly heat resistant, and was therefore long considered a ‘Miracle Mineral’, used to create a plethora of heat-resistant and fireproof items. Not only was it found in an array of building and construction materials, such as concrete, plaster, and insulation sprays, it was also present in many common household objects, such as irons, cooking mats, and heating elements.

The Miracle Mineral

Alongside its use in construction, asbestos could also be spun into fabric. This was used to make fireproof materials which were then used in clothing and fire blankets. Asbestos curtains in particular became a huge asset for theatres. During the late 19th century, fires in theatres were a tragically common occurrence that killed thousands of people over the course of a few decades. As the majority of theatre fires broke out on stage (usually due to unprotected light sources), the introduction of asbestos stage curtains meant that the severity of theatre fires and the number of casualties dropped dramatically.

Asbestos fabric was also often used to cover cables on electrical appliances and in household wiring. Older homes in particular may still have wiring or pipework that is insulated with asbestos, and the cable braiding found on old electrics presents a particular problem because the cables are prone to fraying. As late as the 1980s, many consumer products were made with asbestos, such as fire guards and cooking mats. Many plastics, particularly Bakelite, were also made using asbestos to provide heat resistance in electrical items like radios and hairdryers.

Asbestos Cooking Mats

One of the most prevalent uses for asbestos during the 20th century was in gas masks. Up until the 1960s, gas mask filters were made with asbestos. During the Second World War, 38 million gas masks were issued to civilians throughout the UK, all using asbestos filters. Fortunately, most of these masks were never needed, as Britain never suffered a gas attack. However, those who worked in the factories producing the masks breathed in millions of asbestos particles, which caused numerous respiratory problems decades later. For example, Doris Trimble died in 2008 from oesophageal cancer caused by her work at a gas mask factory between 1941 and 1943. Her family was awarded £48,000 in compensation.

Even today, gas masks pose a health risk. Wartime gas masks are often included in educational kits for schools, and many people still have vintage gas masks in their homes, unaware of the danger they pose if they are put on or even handled too vigorously.

Early Warnings Ignored

People first became aware of the dangers of asbestos during the early 1900s, when hospitals started reporting unusually high rates of early death in areas where asbestos mining was prevalent. As early as 1902, asbestos dust was included in a list of known harmful substances. However, these early findings were completely ignored, and asbestos continued to be mined and processed in enormous quantities. We now know that asbestos is a carcinogen; it causes mesothelioma, a cancer which mostly affects the lining of the lungs, and asbestosis, a long-term scarring of the lungs.

In 1924, the first case of asbestosis was confirmed when 32-year-old Nellie Kershaw was diagnosed with ‘asbestos poisoning’ by her local physician after she became too sick to work. Her employer, however, denied that asbestos was responsible for her illness, and refused to pay her a penny. Following her death, a formal inquest was launched and found that the development of asbestosis was irrefutably linked to the inhalation of asbestos dust; 66% of those who had worked with asbestos for 20 years or more suffered from asbestosis. The report led to the introduction of industry regulation of asbestos in 1931.

Further inquiries continued to demonstrate the long-term consequences of inhaling asbestos. A study of hospital records in England between 1919 and 1964 showed that over 50% of mesothelioma cases were caused by occupational exposure, or because the person in question lived with an asbestos worker. A further 16% of cases were in people who lived within half a mile of just one asbestos factory. It would be another 30 years before children exposed to asbestos from these factories began to develop mesothelioma as adults.

Too Little Too Late

As with many other harmful substances, the government was slow to regulate the use of asbestos. Inspectors in 1975 found that many asbestos factories were not enforcing the 1931 regulations, and lax limitations on air quality control meant that by 1978, 1 in 10 asbestos workers were still developing cancers as a result of industrial exposure. There was no legislation protecting the general public from airborne asbestos until the late 1980s. Eventually, pressure from victims, the media, and members of parliament forced the UK government to make asbestos completely illegal in 1998.

Sources and Further Reading

http://www.stagebeauty.net/th-frames.html?http&&&www.stagebeauty.net/th-tsafe.html

https://spartacus-educational.com/spartacus-blogURL124.htm

https://en.wikipedia.org/wiki/Asbestos

https://www.eea.europa.eu/publications/environmental_issue_report_2001_22/issue-22-part-05.pdf/view

https://en.wikipedia.org/wiki/Nellie_Kershaw

https://www.mesothelioma.com/asbestos-exposure/products/consumer-products/

A Savage Mutilation: The Lobotomy Story

Highly controversial even in its heyday, lobotomy was just one of many experiments within the realm of psychiatry in the quest to find a cure for mental illness. Following new discoveries about the functions of different areas of the brain at the end of the 19th century, ideas about surgical treatments for mental maladies became popular from the 1880s right through to the 1950s.

The idea that mental illness could be cured by operating on the frontal lobe was inspired by a study conducted on chimpanzees in 1935, which found that the removal of the prefrontal lobe caused a number of emotional changes in the primates. A neurologist from Lisbon, Egas Moniz, took the findings as inspiration to begin developing a similar treatment for his human patients. The procedure was picked up in the US by Walter Freeman and James Watts, who developed the standard surgical procedure which came to be known as lobotomy.

Destructive Surgery

Lobotomies were performed via two different methods. Moniz’s method involved cutting into the white matter of the prefrontal lobe using a whisk-like instrument. The instrument was moved back and forth, severing the nerve connections; the front of the brain was effectively scrambled. In 1946, Freeman developed the transorbital lobotomy, in which a pick was inserted through the eye socket to sever the nerve connections. Both techniques were extremely imprecise, and it was impossible to guarantee what the outcome of the procedure would be. Indeed, even neurosurgeons who promoted the use of lobotomies often reported at best a 50% ‘success’ rate.

Lobotomy tools owned by Walter Freeman

The procedure essentially worked by destroying the part of the brain that manages executive function. The prefrontal cortex governs our ability to tell good from bad, to anticipate the consequences of our actions, and to suppress urges that might have socially unacceptable results. It also plays an important role in decision making, and in our ability to express ourselves as individuals.

In short, lobotomy destroyed a patient’s personality and their ability to regulate their behaviour through cognitive decision making. This made them docile and pliable; in some psychiatrists’ opinions, the perfect patient. Lobotomies were therefore typically performed on patients who were considered difficult to care for, and the procedure was disproportionately carried out on women.

A Famous Case

One of the most famous lobotomies was the one performed on Rosemary Kennedy, the eldest sister of President John F Kennedy. Though details of her life are unclear, it is thought that she may have had some form of learning disability, and may have been suffering from depression. Fearing that her somewhat erratic behaviour would embarrass the family and harm her siblings’ political careers, her father arranged for her to have a lobotomy. In 1941, when Rosemary was 23 years old, Watts and Freeman performed a transcranial lobotomy, inserting an instrument through an incision in her skull. In a disturbing example of the inaccuracy of the procedure, Watts explained: ‘We made an estimate on how far to cut based on how she responded.’ When Rosemary began to become incoherent, they stopped.

As one might expect, the procedure was not a success. Rosemary was left with the mental capacity of a two-year-old. She could no longer walk or speak properly, and she became incontinent. Like so many other women who were subjected to lobotomies, Rosemary spent the rest of her life hidden from the rest of the world.

Lessons Learned

Today, we recognise that lobotomy was an indefensible violation of patient rights. In retrospect, the procedure offered no curative or therapeutic value, and served only to pacify people who were considered too difficult to care for. It was plain that lobotomies caused massive and irreparable damage, and even those in the psychiatric professions viewed the procedure with growing unease. Thankfully, it was one of the first procedures to be dropped in favour of drugs during the 1950s, but it should still serve as a potent reminder of the dangers of unchecked medical hubris.

Sources and Further Reading

https://en.wikipedia.org/wiki/Rosemary_Kennedy

Shorter, E., A History of Psychiatry, (1997)

Cruelty or Cure? The ECT Debate

Electroconvulsive therapy (better known as ECT) was introduced in 1938, and quickly became widely used in psychiatric hospitals as a new therapeutic treatment. It was found to be particularly useful in treating major depression, and ECT rapidly replaced the less safe drugs, such as Metrazol, that had previously been used to induce seizures. However, the therapy was not without controversy, and ECT still carries negative connotations to this day.

A Late 20th Century ECT Machine

Early Use

Dr Ugo Cerletti first discovered the potential of electroconvulsive therapy whilst conducting research on epilepsy during the 1930s. In 1935, he was appointed as chief of the Clinic for Nervous and Mental Disease at the University of Rome. He took on three assistants to continue his research into inducing convulsions, and in 1936 they discovered that delivering an electric current through electrodes placed on the temples of dogs would safely induce such convulsions. After numerous experiments on animals, the technique was then successfully used to treat a patient suffering from psychotic symptoms. After eleven rounds of ECT (consisting of 100 volts for a fraction of a second), the man was able to return to a more or less normal life.

News of the successful treatment quickly spread, and by 1959 ECT had become the treatment of choice for depression throughout the US and Europe. Psychiatrists found that it was more effective than other physical therapies, and had relatively few side effects.

The Anti-Psychiatry Movement

However, as science and medicine made more and more progress, many people began to feel that doctors were lacking in empathy. With so many new drugs available to treat patients, some felt that psychiatry had become cold and clinical rather than caring. Chief among the criticisms was the idea that there was no such thing as mental illness; intellectuals such as Michel Foucault argued that madness was a social and cultural invention, and that psychiatry was simply a way of controlling people who didn’t behave the way society wanted. This mindset was exacerbated by the abuse of ECT on non-compliant patients in disreputable state asylums, and by its use on patients who were unable to give informed consent.

Following growing public mistrust of ECT, American lawmakers began to strictly regulate its use in psychiatric hospitals, even going so far as to ban it in some states. The Scientology movement also used its considerable funds to spearhead a number of campaigns to have ECT completely outlawed. In the 1950s, Scientology had introduced ‘Dianetics’ as an alternative to psychotherapy, and saw ECT as a dangerous threat. As court case after court case garnered mass media attention, the public was naturally alarmed – if ECT was as safe as doctors claimed, why were so many jurisdictions considering banning it?

Though many psychiatrists still argued that ECT was one of the most effective treatments for depression, the public was swayed by constant negative portrayals, not just from Scientology-funded court cases, but also in popular media. Films such as the 1975 adaptation of One Flew Over the Cuckoo’s Nest seemed to cement the idea that ECT was nothing but a cruel punishment. Most universities stopped including ECT in their training programmes, and by 1980 the use of ECT in psychiatric hospitals had dropped by about 39%.

Science Fights Back

Even against public opinion, advocates of ECT continued to push the empirical evidence, arguing that all the data showed ECT to be much more effective than any other available treatment. In 1985, a conference held by the National Institutes of Health concluded that universities should resume training in ECT. Many other countries had continued to employ ECT, and American attitudes were beginning to look quite outdated. In 1990, a new report from the American Psychiatric Association recommended ECT not just for severe depression, but also for bipolar disorder and psychotic schizophrenia. The public, however, remained squeamish about what seemed like such a severe intervention, and the use of ECT fell by roughly half as doctors began to favour less invasive treatments.

ECT Today

Modern doctors and psychiatrists still use ECT to treat severe depression when a patient has not responded to other treatments or medications. Unlike in the past, it is now generally seen as a last-resort therapy, as advancements in cognitive-based therapies and new drug innovations have improved the options and prospects for those experiencing mental health problems.

Sources and Further Reading

https://www.mind.org.uk/information-support/drugs-and-treatments/electroconvulsive-therapy-ect/about-ect/

https://en.wikipedia.org/wiki/Electroconvulsive_therapy

https://en.wikipedia.org/wiki/History_of_electroconvulsive_therapy_in_the_United_Kingdom

Shorter, E., A History of Psychiatry, (1997)

Something in the Air: How Tetraethyl Lead Poisoned the World

In the early 1920s, General Motors was searching for a way to improve engine performance and eliminate the ‘knocking’ that plagued contemporary engines. The octane-boosting properties of tetraethyl lead (known as TEL) had been discovered in 1921 by the General Motors chemist Thomas Midgley, who suggested adding it to petrol as an anti-knocking agent. This decision would have deadly long-term repercussions, with millions of people still suffering the effects of lead poisoning to this day.

A Well-Known Peril

Lead came into widespread use during the Roman era as a by-product of silver mining. Due to its abundance and low melting point, lead was found to be ideal for producing many common household items, including pots, wine urns, plates, and water pipes. However, as early as the first century AD, the toxic properties of lead were well-known; early symptoms of lead poisoning that various Roman sources describe included weakness, fatigue, nausea, and abdominal pain.

Throughout the centuries, the ill effects of lead continued to be documented. In 1696, the Count of Württemberg banned the addition of lead to wine after an outbreak of lead poisoning, and in 1892 the first accounts of children dying from ingesting lead paint began to emerge.

Even a mere decade before the introduction of lead to petrol, physicians were still documenting its poisonous effects. In 1914, conclusive proof emerged that numerous childhood deaths had been caused by lead poisoning. Many people were therefore rightly worried that adding lead to petrol would cause lead oxide to be released into the air in huge quantities.

‘No Evidence of Harm’

In response to these concerns, the Ethyl Corporation, which was owned by General Motors, funded a number of studies to try to convince the public that TEL in petrol was perfectly safe. One such study, conducted by the Bureau of Mines, reported that ‘animal studies indicated no risks to the public’. However, the report was widely criticised by a number of respected scientists. They argued that the number of animals used was too small, and the duration of exposure too short, to draw any reliable conclusions about TEL’s safety.

The Ethyl Corporation also had the support of some health officials. Dr Emery Hayhurst of the Ohio Department of Health stated that from ‘observational evidence’ he could conclude that TEL was completely safe. However, few people at the time knew that Dr Hayhurst was also a paid consultant to the Ethyl Corporation, and an advisor to the Bureau of Mines.

Many reputable scientists, including Alice Hamilton, the US’s foremost expert on lead, argued that the risk to public health was perfectly clear. Not only that, but there were also several safer alternatives to lead. Ethanol had been demonstrated to be just as efficient in experiments conducted by the US Navy in 1908. However, the Ethyl Corporation denied that alcohol was an alternative to TEL, and did everything it could to bury the evidence. Indeed, until 1965, all studies on TEL were funded by the Ethyl Corporation and General Motors.

In the end, the short term economic benefits of leaded petrol were prioritised over the risks of long term health problems, and leaded petrol became a mainstream commercial industry.

A fuel pump at an American gas station. Signs like this were once a common occurrence.

Why is Lead so Dangerous?

While lead is dangerous to adults and children alike, children are more severely affected by lead poisoning. Lead exposure has been shown to cause a huge number of developmental problems and cognitive disorders, and there is no known safe threshold for exposure. For example, in his 1979 study, Needleman showed that the higher the lead content in children’s blood, the greater the negative impact on IQ. Later studies estimated that average IQ scores rose by around 5 points as average blood lead concentrations declined during the phase-out of leaded petrol.

Even at the lower levels of exposure touted by petrol companies as safe, lead is known to cause numerous long-term health problems. As well as diminished cognitive ability, lead exposure has been linked to shortened attention span, dyslexia, attention deficit disorder, hypertension, kidney damage, and reproductive problems. Numerous studies conducted between 1943 and 1996 have confirmed the link between lead exposure and anti-social behaviour. More recent studies have suggested that the incidence of degenerative brain diseases such as Alzheimer’s has been increasing at a rate consistent with the number of adults who were exposed to lead as children.

The Truth Starts to Come Out

Though scientists had lost the initial battle against leaded petrol in 1926, many continued to doubt the claim that lead exposure was ‘normal and safe’, as the Ethyl Corporation insisted. In 1965 the geochemist Clair Patterson published a paper showing that alongside the severe illness seen with acute exposure to lead, there were also a number of negative effects caused by constant low-level poisoning. Once again, the Ethyl Corporation tried to bury the evidence, even attempting to bribe Patterson. When he refused, they ensured he was removed from professional boards, and he lost his contract with the Public Health Service.

Following Patterson’s revelations, the government began to express more concern about the impact of lead exposure. In 1966, Patterson and a number of other scientists once again testified to the dangers, and criticised the Public Health Service for relying on data provided by the Ethyl Corporation, which was undoubtedly biased. As a result, in 1970 federal funds were allocated for the first time to study the impact of lead exposure in children, ending the monopoly on scientific data that the Ethyl Corporation had held since 1925. Herb Needleman conducted a groundbreaking study in which he analysed the milk teeth of more than two thousand children and correlated their lead content with the children’s later cognitive development. His findings influenced the EPA’s clean air criteria in 1977, which recognised the extreme risk of brain damage occurring in individuals who showed no other symptoms of lead poisoning. His and many other studies proved beyond a doubt what scientists had been saying for decades: lead was demonstrably and devastatingly poisoning children.

Phasing Out

An inquiry led by Senator Edmund Muskie ended the government’s complacency towards lead pollution, and steps began to be taken to actively reduce the concentration of lead in the air. With the introduction of the Clean Air Act in 1970, General Motors announced that they would be phasing out leaded petrol because it damaged the platinum needed in new catalytic converters. However, leaded petrol was not completely phased out in the US until 1995.

In the UK, the government was slow to force industries to withdraw leaded petrol from use, as the production and export of TEL was a major economic contributor. Finally bowing to public and scientific pressure, the UK began to phase out leaded petrol from 1987, when the government introduced a preferential tax rate on unleaded fuel. Leaded fuel was completely banned by the EU in 2000, a full seventy-five years after its introduction.

Sources and Further Reading

https://penelope.uchicago.edu/~grout/encyclopaedia_romana/wine/leadpoisoning.html

https://en.wikipedia.org/wiki/Lead_poisoning

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6454899/

https://www.nejm.org/doi/full/10.1056/nejm199001113220203

https://www.bbc.co.uk/news/business-40593353

The Radium Girls

The discovery of radium in 1898 by Marie and Pierre Curie was followed by a craze for all things radium, as people rushed to find commercial applications for the new and exciting element. During the 1910s, glow-in-the-dark products took the public by storm, with glowing clocks and watch dials becoming extremely popular.

A Glowing Opportunity

With the onset of the First World War, many women took up work in factories. Radium factories were seen as a particularly glamorous option, as the glowing paint was fun to work with and the positions were very well paid. After the US entered the war in 1917, the military entered into a contract with the Radium Luminous Materials Corporation to produce glowing watches for its soldiers. This brought a new influx of staff to the radium factories, most of them teenagers and young women, whose smaller hands were considered better suited to the minute detail needed to paint the watch dials. However, what at first seemed to be a great opportunity soon turned out to be deadly.

Dangerous Disinformation

Even though the danger of radium poisoning was well known to the factory managers, the women employed to work with the glowing paint were not informed of it. In fact, they were deliberately misled into believing that the paint they were using was completely safe. Each painter applied the radium paint using a small camel hair brush, which had to be re-pointed every few strokes. To keep the brush tips sharp, management instructed the women to lick the brushes, causing them to ingest large quantities of the radioactive paint.

The factory floor at the United States Radium Corporation. Women working with radium paint were given no protective equipment.

The first person to publicise that radium was making the dial painters sick was a dentist from New Jersey. In 1924, Dr Theodor Blum came across a patient suffering from constant bleeding and infection of the jawbone. Making the link between the condition and his patient’s occupation as a dial painter, he dubbed the new phenomenon ‘radium jaw’. The following year, Dr Frederick L. Hoffman found several more dial painters who were suffering from the same symptoms. It is now understood that the body treats radium as a substitute for calcium, absorbing it into the bones and teeth. As the years went on, more and more women who had worked as dial painters sickened and died from anaemia and bone cancer caused by radium poisoning.

A Wall of Denial

Though study after study linked radium ingestion to the sickness the dial painters were suffering from, the companies tried a whole host of underhand tactics to silence the women and save their own reputations. Doctors were encouraged to claim that the affected girls had died of syphilis in an attempt to discredit them. The company also claimed that, as an act of kindness, it had hired ‘a great many people who were physically unfit to procure employment in other lines of industry’. It insisted that the poisoned women had already been sick when they were hired, and that radium therefore couldn’t be to blame.

A Victory for Occupational Health

In 1928, five women were finally able to bring U.S. Radium to court: Grace Fryer, Edna Hussman, Katherine Schaub, Quinta McDonald, and Albina Larice. The company had tried to delay the case as much as possible, as the women were all so sick that there was a good chance they would die before they reached trial.

News of the trial, and the evidence of both the women’s suffering and the company’s underhand tactics, caused worldwide outrage. Realising that the jury would not find in its favour, U.S. Radium settled the case, giving each woman $10,000 (around $150,000 today) plus another $600 a year. It was a miserly sum that did little to abate the suffering the women had endured. Even then, the company had the audacity to appeal against findings against it in other cases across the country, eventually going all the way to the Supreme Court in 1938. Fortunately, the Supreme Court refused to hear the case, and the company was forced to accept liability for all the lives it had ruined.
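
As an aside, the ‘around $150,000 today’ conversion can be reproduced with simple compound-inflation arithmetic. The 3% average annual rate and the 2020 reference year in the sketch below are assumptions chosen purely for illustration; an exact CPI-based conversion would give a somewhat different figure.

```python
# Rough compound-inflation check on the 1928 settlement figure.
# The average inflation rate and the reference year are assumptions for
# illustration; a CPI-based conversion would differ slightly.

settlement_1928 = 10_000    # each woman's lump-sum settlement, in 1928 dollars
avg_inflation = 0.03        # assumed average annual US inflation rate
years = 2020 - 1928         # assumed reference year for "today"

factor = (1 + avg_inflation) ** years
print(f"Inflation factor: {factor:.1f}x")                             # about 15x
print(f"Approximate value today: ${settlement_1928 * factor:,.0f}")   # about $150,000
```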

Some lasting good did come of the case, however. It established the right of workers to sue companies for damages, and set several important legal precedents that gave much greater protection to workers in the US. These included extending the statute of limitations, giving workers longer than the two years they had previously been allowed to find and report illnesses caused by their work. In 1941, the state of New Jersey passed a law making all industrial diseases eligible for compensation, and included radium necrosis as a recognised occupational disease.

Sources and Further Reading

https://interestingengineering.com/the-radium-girls-workers-who-painted-with-radium-and-suffered-radiation-exposure

https://s3.wp.wsu.edu/uploads/sites/1058/2016/09/Rowland-1994.pdf

https://en.wikipedia.org/wiki/Radium#:~:text=Radium%20is%20a%20chemical%20element,as%20the%20alkaline%20earth%20metals.

Moore, K., The Radium Girls, (2016)

Ergotism: The Mystery of St Anthony’s Fire

Ergot is a fungus that grows on rye and other cereal crops when conditions are unusually damp. Before the fungus was discovered, ergot poisoning caused several outbreaks of illness that at the time were either unexplained or attributed to supernatural phenomena. Ergot poisoning takes two distinct forms: convulsive ergotism presents with tingling sensations, spasms, and insomnia, while in gangrenous ergotism parts of the body become swollen and inflamed, with painful skin eruptions that led to the condition becoming known as Holy Fire or St Anthony’s Fire. Eventually, loss of blood circulation can lead to entire limbs dropping off.

Faith Healing

During the 10th century, the Rhine Valley saw a particularly devastating outbreak of ergotism (though at the time no one knew that the fungus was to blame). As thousands of people suffered from severe burning sensations, the affliction was likened to hellfire, and it was assumed that the plague had been sent as divine punishment. Linking the symptoms to the suffering of St Anthony, relics and icons of the saint were used in hospitals run by monks across Europe to heal sufferers. The fact that many did indeed recover was probably due to the better quality bread they received at the hospitals, which removed the source of the poisoning and thus allowed the body to recover naturally.

Pilgrims suffering from ergotism (St Antony’s fire) approaching the infirmary in which the relics and bones of the saint are preserved. Oil painting by Ernest Board.

Mysterious Illnesses

Despite the connection between ergot-tainted bread and outbreaks of St Anthony’s Fire having been made as early as 1676, later physicians were often still nonplussed when cases presented themselves. In 1762, an entire family in a small village in England presented with severe leg pain; within a few days, all seven people had suffered some loss of limb. The mother lost her right foot and all the flesh on her left leg, the eldest daughter lost one leg below the knee, the middle daughter lost both legs below the knee, and the youngest daughter lost a foot. The sons, aged eight and four, lost both legs and both feet respectively. The youngest child, a four-month-old infant, died. At the time, neither the family, their physician, nor the local priest could say what had caused this mysterious and unpleasant affliction. Given their symptoms, it is almost certain that they were suffering from ergotism.

The Salem Witch Trials

Several historians have suggested that ergotism may have been the cause of the symptoms exhibited by the afflicted accusers during the Salem Witch Trials of 1692 and 1693. Convulsive symptoms of ergotism, including crawling sensations on the skin, headaches, hallucinations, muscle contractions, vomiting, and psychosis, all match symptoms reported in the original Salem records. Historians such as Linnda Caporael and Mary Matossian have also pointed out that the abundance of rye in the region, coupled with an unusually wet winter, could easily have caused an outbreak of the ergot fungus. However, most historians agree that social factors and mass hysteria explain the events in Salem better than any biological cause.

Modern Cases

Ergotism was very common throughout Europe in the Middle Ages, but in Britain it was relatively rare. Despite the country’s cool and damp climate, wheat was generally favoured over rye, and potatoes became a widespread supplement to the average diet after their introduction in the 1590s.

Though now incredibly rare, outbreaks of ergotism do sometimes occur. In 1951, the French town of Pont-Saint-Esprit suffered an outbreak that resulted in five deaths, and an outbreak also occurred in Ethiopia as recently as 2001.

Sources and Further Reading

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2069953/?page=2

https://en.wikipedia.org/wiki/Salem_witch_trials

https://en.wikipedia.org/wiki/Ergotism

https://www.sciencedirect.com/topics/medicine-and-dentistry/ergotism

https://www.nationalgeographic.com/history/magazine/2018/11-12/ergotism-infections-medieval-europe/

Stewart, A., Wicked Plants: The A-Z of Plants that Kill, Maim, Intoxicate and otherwise Offend, (2016)

Morris, T., The Mystery of the Exploding Teeth and other Curiosities from the History of Medicine, (2018)

Nerves of Steel: The Doctor who Removed his own Appendix

On 29th April 1961, Leonid Rogozov was faced with a decision that nobody would want to make. He had developed acute appendicitis, and knew that his appendix would have to be removed or he would almost certainly die. There was just one problem; he was the only doctor within 1600 miles.

Complete Isolation

Leonid Rogozov was part of a Soviet Antarctic expedition sent to establish a new research base, and by February he and his team had settled in to wait out the long polar winter. In a report that he wrote after his experience, Rogozov stoically states that he ‘did not feel well’ on the morning of April 29th. In his diary he is more forthright: ‘It hurts like the devil!’, he wrote. ‘A snow storm whipping through my soul, wailing like 100 jackals.’ Rogozov had no doubt he was suffering from appendicitis, and knew that his options were limited. Though there were other foreign stations nearby that might have offered assistance, none of them had a plane or any other means of getting him to another doctor, and an incoming blizzard ruled out the possibility of travelling far. By the evening, Rogozov knew he had no choice – he would have to perform the surgery to remove his appendix himself.

Problem Solving

Rogozov planned his surgery meticulously. With the assistance of the meteorologist Artem’yev and the driver Teplinskiy, he established a set-up in which he could lie in a semi-reclining position, so that he could see what he was doing. He also had Teplinskiy hold a mirror, though Rogozov preferred to rely on touch, as the reversed image in the mirror added a further layer of complication.

Leonid Rogozov performs an appendectomy on himself.

Rogozov considered every aspect of the surgery, and even made sure to instruct his assistants on what to do if he lost consciousness during the operation. Then, using only a local anaesthetic of novocaine, Rogozov cut into his own abdomen. With no other anaesthetic possible, he had to take regular breaks to cope with the nausea and vertigo. After two gruelling hours, Rogozov had successfully removed his appendix, and not a moment too soon. When he saw the darkened stain on his appendix, he realised it had been just a day away from rupturing.

After a week’s difficult recovery the wound had fully healed, and after two weeks Rogozov was able to carry out his normal duties. The surgery had been a complete success.

What Next?

When Rogozov returned to Russia in 1962, he was praised as a national hero. Rogozov, however, was not interested in being a celebrity. He immediately went back to completing his MD thesis on surgery to treat throat cancer, which he presented in 1966. He worked as a doctor and surgeon for the rest of his life, until his death from lung cancer in 2000 at the age of 66.

Polar Exploration Today

Today, doctors due to be stationed at some remote Antarctic bases have their appendixes removed as a matter of course, to prevent such an emergency arising. However, those working in such a remote and hostile environment still face extraordinary challenges. Even today, hypothermia, snowmobile accidents, and even fire are all very real dangers in the polar wastes. Though aided by state-of-the-art technology and rigorous training, modern researchers meet these challenges with the same courage and presence of mind that Rogozov demonstrated to survive against the odds.

Sources and Further Reading

https://www.bbc.co.uk/news/magazine-32481442

http://corpus.leeds.ac.uk/serge/rogozov-1964.pdf

https://en.wikipedia.org/wiki/Leonid_Rogozov

https://www.bmj.com/content/339/bmj.b4965

Haddelsey, S., Icy Graves: Exploration and Death in the Antarctic, (Gloucestershire, 2018)

The Victorians and Arsenic

The advent of the industrial revolution at the beginning of the 19th century brought new manufacturing processes that could turn out products on a previously unimaginable scale.

Arsenic was a by-product of the mining and smelting industry, and began to be produced in enormous quantities during the 1800s. Arsenic was already a well known poison, having been the weapon of choice for many murderers for several centuries. However, with the new influx of supply, people began to find other uses for arsenic, and it slowly invaded almost every aspect of Victorian life.

Accidents Happen

Throughout most of the 19th century, the sale of arsenic was poorly regulated, and anyone was able to buy or sell it. Arsenic was most commonly sold and used as rat poison, and due to its cheapness (half an ounce cost a penny, the same as a cup of coffee) and ready availability, arsenic could be found in almost every shop, home, and factory in Britain.

An unscrupulous chemist selling a child arsenic and laudanum. Wood engraving after J. Leech.

Arsenic was often stored around the home in unlabelled containers, and could regularly be found in the pantry or near other foodstuffs. However, white arsenic bore an unfortunate resemblance to a myriad of other widely used powders, including sugar, flour, chalk, plaster, and starch. Mistakes were inevitable; there are countless examples of arsenic accidentally being baked into cakes and other edibles. In the space of just two years (1837-39), there were 506 accidental deaths reported in Britain caused by inadvertently ingesting arsenic. Countless others were made horrifically ill, with symptoms including headaches, severe diarrhoea, vomiting, stomach pains, and convulsions.

Tricks of the Trade

Accidents like these were made even more likely by the actions of unscrupulous traders and manufacturers. The act of switching ingredients for cheaper substitutes (known as adulteration) was a common practice during the Victorian era. For example, confectioners often replaced powdered sugar with plaster of Paris, which was considerably cheaper. However, in one shocking case that became known as the Bradford Incident, arsenic was accidentally used instead of the plaster. The tainted sweets killed more than twenty people (including several children) and caused violent illness in dozens more.

Arsenic did not even need to be ingested in order to kill. Breathing in powdered arsenic was also responsible for countless poisonings. Makers of artificial flowers were in particular danger, as the green dye (made from copper arsenite) used to colour leaves was often rubbed onto waxed fabrics by hand. Not only did those employed in flower workshops suffer painful skin irritation, they also ended up inhaling large amounts of arsenic through the dust that constantly hung in the air. Those who weren’t killed by the arsenic were often forced to take extended periods of leave from work as their injuries became increasingly painful and disabling.

Everyday Arsenic

Despite the known dangers of accidental poisoning, and the well publicised examples of such incidents, arsenic-tainted products could be found everywhere. Perhaps the best known use of arsenic was in ‘Scheele’s Green’. The colour was the height of fashion during the 1800s, and copper arsenite was found to give a much more vibrant green than other substances like malachite or verdigris. Arsenical dyes were used everywhere: not only in artificial flowers, but also in paints which were then used to colour bedroom walls and children’s toys, in paper used to wrap confectionery and cover books, and in fabrics used to make dresses, stockings, and gloves.

Copper arsenite dye was also used in wallpaper, perhaps most famously in the papers designed and manufactured by William Morris. Though by the 1850s there was growing suspicion among consumers, physicians, and scientists alike that green wallpapers were making people sick, Morris dismissed the claims. As the prevailing belief was that it was arsenic dust that made people ill, and that only cheap wallpaper could flake and produce such dust, manufacturers argued that there was nothing to fear from their good quality papers. It was not until 1891 that it was conclusively demonstrated that fungus growing in wallpaper paste would metabolise the arsenic in wallpaper and release an arsenical gas; it was this gas, rather than dust, that had been covertly afflicting people for decades.

The Power of Public Opinion

The government was highly reluctant to intervene in the use and sale of arsenic, or to impose any strict regulations that might have ensured consumers’ safety. The imposition of regulation into citizens’ private lives was seen as a violation of individual rights. Politicians, traders, and indeed many consumers felt that people should be free to buy and sell as they wished. If the public didn’t like something, they argued, they could simply stop buying it. This is precisely what happened; by the end of the century, public fear of arsenic poisoning had become so great that consumers heavily favoured brands that purported to be ‘arsenic free’. Realising that consumers were finally rejecting arsenic-tainted products, manufacturers followed public opinion, and the widespread poisoning seen during the 1800s became largely a thing of the past.

Sources and Further Reading

https://crosscut.com/2010/09/arsenic-victorians-secret

https://hyperallergic.com/329747/death-by-wallpaper-alluring-arsenic-colors-poisoned-the-victorian-age/

https://en.wikipedia.org/wiki/Arsenic_poisoning#:~:text=Arsenic%20poisoning%20is%20a%20medical,watery%20diarrhea%20that%20contains%20blood.

Whorton, JC., The Arsenic Century: How Victorian Britain was Poisoned at Home, Work and Play, (New York,  2010)

Haslam, JC., ‘Deathly décor: a short history of arsenic poisoning in the nineteenth century’, Res Medica, Vol 21, Issue 1, 2013