By the turn of the 20th century, a new wave of science and expert knowledge swept across medicine. In the 1890s, medical education at elite institutions began shifting toward the science-based curricula we’d recognize today. Chemists (mostly in Germany) began to isolate the singular compounds responsible for the therapeutic effects of old remedies. The chemists’ other great German patron, the synthetic dye industry, lent medicine compounds like antifebrin, an early antipyretic (fever-reducing) drug. Finally, the microbial theory of disease (germ theory) and the isolation of the microorganisms responsible for illness gave science a new, deeper understanding of the causes of disease and the means of preventing it.
Empowered by the understanding that unseen microbes had a direct effect on human health, scientists and physicians mounted a sustained campaign, beginning in the early 1880s, to clean up the US drug supply. Against the massive profits of patent medicine, they made little headway. It took Upton Sinclair and other muckraking journalists to shine a light on the dire state of the nation’s food and drug supply. The result was the passage of the 1906 Pure Food and Drug Act.
The 1906 Act delineated dangerous drugs and outlawed the sale of poisonous patent medicines. The US Bureau of Chemistry (USBC) would test patent medicines to make sure their contents were safe for human use. (The USBC ultimately gave way to the modern FDA.) The Act also established labelling criteria for many drugs, though by modern standards these were altogether lacking. It had an immediate chilling effect on patent medicine. Estimates vary, but between the Act’s passage in 1906 and the start of enforcement in 1908, the number of patent medicines available in the US may have fallen by a third. The advertisements, salesmen, and colorful trade cards began to disappear.
The Act was initially unpopular with ordinary Americans, most of whom believed that self-treatment was a right, much in the same vein as freedom of speech. But new drugs and vaccines, many of them magic bullets against tuberculosis, diphtheria, typhus, pertussis, and more, lent strength to the argument that physicians could provide care more effectively than patients could for themselves.
Mid-Century and the Emergence of Defined OTC and Prescription Drug Categories
In the 1920s, American pharmaceutical companies rode a wave of popular consumerism as normalcy returned after the war years. The “Roaring 20s” saw the popularization of cars and radios in the US and shifted how and where advertising occurred. Traveling medicine shows faded, replaced by billboards, radio spots, and a massive number of print advertisements in magazines, serials, and newspapers. Ads increasingly relied on what we’d now call consumer psychology to condition buyers to reach for certain brands.
For pharmaceuticals in the 1920s and 30s, scientific formulation and testing of chemical compounds increased the efficacy of new treatments. These new drugs offered near-miraculous remedies for conditions that had killed in previous decades. Mass-produced drugs of the era, like insulin, sulfas, and barbiturates, remain in use today, albeit in new and better forms. Alongside these highly efficacious drugs, patent medicines persisted, now labeled and purer, but often of questionable efficacy nonetheless.
In 1938, the paradigm shifted again for US advertisers. That year, following the mass accidental poisoning of patients with Elixir Sulfanilamide (sulfanilamide dissolved in toxic diethylene glycol), the US Government passed the Federal Food, Drug, and Cosmetic Act. The act did several things. First, it put in place an early form of pre-market drug testing. The subsequent Wheeler-Lea Act granted the Federal Trade Commission (FTC) the power to regulate the advertising of products the FDA regulated. The US Government thereby strengthened its claim to police the claims and portrayal of drugs to the public. Almost unbelievably to modern sensibilities, the act left it to manufacturers to decide whether a new drug would be available over the counter or by prescription.
By the 1940s, the FDA had identified nearly two dozen OTC drugs it considered potentially dangerous to consumers. After a delay imposed by World War II, Congress amended the food and drug act in 1951 (the Durham-Humphrey Amendment) to create the bifurcated environment we see now. Prescription drugs comprised the more powerful, specialized (and potentially dangerous) formulations. OTC medicines had to be labeled so that patients could use them safely on their own. These changes took a major swipe at the idea that self-treatment was a right. Rather, the most efficacious treatments would be mediated through a prescription pad and the trained hand of a physician.

This ultimately had two effects. First, deaths and hospitalizations from OTC drugs plummeted. Second, a very large share of US pharmaceutical spending now depended on reaching and influencing doctors, not their patients. Over the course of the 1950s, advertisers cut back on radio, newspaper, and the new medium of television and instead focused on reaching doctors through conferences and medical journals. Soon after this change, early forms of the pharma rep emerged in limited numbers to engage doctors personally.
While prescription medication disappeared from the public eye, OTC medicine followed the broader transformation of mid-century advertising. Driven by Ogilvy, Benson & Mather, advertising shifted from a product focus to a brand and customer focus. In the years following World War II, major US manufacturers began competing on a global scale for the first time and developed the discipline of brand management. As practiced by Ogilvy (and soon copied by many other agencies), each product took on a brand with identifiable properties that captured both the emotional and value propositions of the product. It wasn’t enough to know about aspirin. Customers needed to know Bayer Aspirin, recognize its distinctive yellow-and-brown color scheme, reach for it as the go-to medication for aches and pains, and remember that “Bayer Works Wonders.”
Part of the Ogilvy revolution, particularly in the 1950s and 1960s, leveraged “certainty.” A stream of public-opinion research, focus groups, and psychological studies began to build a science of consumer decision-making, exploring the emotional factors behind purchasing decisions. This work popularized the AIDA model (Attention, Interest, Desire, Action) and other rules of thumb for brands attempting to prompt a purchase or retain a customer. It also infused data into marketing, a practice that has guided branding and creative at companies and agencies ever since. These techniques were quickly applied to OTC medications.
In the decades between the Pure Food and Drug Act (1906) and the “Bayer Works Wonders” campaign (1965), the shape of pharmaceutical advertising changed completely. What began with print ads, traveling shows, and billboards was transformed as radio, television, and the automobile changed where and how advertising occurred. After 1951, manufacturers and advertisers tackled two classes of drug: prescription and OTC. Whereas in the 1920s the vast majority of pharmaceutical marketing targeted patients, by the mid-60s the majority of ad spending aimed to influence doctors, pharmacists, and hospitals.
In the decades after, the industry would undergo rapid change.