Perplexity’s Deep Research
Perplexity has released their own version of Deep Research, free for everyone, but Pro subscribers like me have unlimited queries! No need to pay the $200/month for OpenAI's Pro subscription.
Yesterday, I asked my first question:
What would be possible outcomes of applying constructive math to quantum physics to remove the need for creating infinities?
I deliberately didn’t describe constructive math and didn’t mention the problem of renormalization. I have some rough (very rough) knowledge in this area, so I know enough to evaluate that the output seems on track, but not enough to judge its quality. Perplexity ran with it, and here’s the summary:
The application of constructive mathematics to quantum physics represents a paradigm shift in addressing the persistent problem of infinities in quantum field theory (QFT). By re-examining foundational mathematical structures—from arithmetic systems to spacetime discretization—researchers propose novel frameworks that inherently avoid divergences. Key outcomes include the development of finite quantum theories (FQT) based on discrete algebraic systems, rigorous constructions of 4D QFTs via renormalization group methods, and the integration of non-Diophantine arithmetic to ensure convergent integrals. These approaches not only circumvent the need for renormalization but also align theoretical models with the discrete nature of quantum phenomena, potentially bridging gaps between QFT and quantum gravity.
OK, Deep Research was able to pick up on my intent (great!), but quickly ran far ahead of my knowledge in this area.
So I wanted to wade into something I was more familiar with, knowing that if there were things I didn’t know, I could quickly get up to speed: What were the root causes of the First AI Winter?
AI Winters, if you’re not familiar with them, were periods in the 1970s and again in the late 1980s and early 1990s when general disillusionment with AI led to decreased funding for research. Today, some people claim we’ll have a third AI winter, but I suspect that is not true, or if it is, it will come after AI is deeply embedded in most aspects of our lives.
So without further ado ...
The Root Causes of the First AI Winter (1974–1980): A Comprehensive Analysis
The First AI Winter, spanning from 1974 to 1980, remains a pivotal chapter in the history of artificial intelligence. Characterized by dwindling funding, waning public enthusiasm, and a crisis of confidence in AI’s potential, this period was shaped by a confluence of technological, institutional, and conceptual challenges. Below, we dissect the root causes of this stagnation, synthesizing insights from historical records, academic critiques, and technological limitations that collectively precipitated the downturn.
Overambitious Expectations and Unfulfilled Promises
The genesis of the First AI Winter lies in the stark disparity between the field’s early promises and its tangible achievements. During the 1950s and 1960s, AI pioneers such as Herbert Simon and Marvin Minsky made bold predictions about machines achieving human-like intelligence within decades. Simon famously declared in 1957 that “there are now in the world machines that think, that can learn and that can create,” with capabilities poised to expand rapidly. source source These pronouncements, amplified by media hype, set unrealistic benchmarks for progress.
By the early 1970s, however, it became evident that AI systems struggled with basic tasks requiring perception, contextual understanding, and adaptability. While programs like SHRDLU could manipulate blocks in a simulated environment or solve algebra problems, they operated within narrowly defined “toy” domains. source source The inability to scale these systems to real-world applications eroded confidence among funders and policymakers. This disillusionment was exacerbated by Moravec’s paradox, which highlighted the incongruity between AI’s proficiency in abstract reasoning and its failure to replicate rudimentary human skills like object recognition or motor coordination. source
Technological Limitations: Hardware and Data Constraints
A critical bottleneck during this era was the inadequacy of computational infrastructure. Early AI research relied on machines with limited memory and processing power, rendering complex tasks like natural language processing or image recognition computationally intractable. source source For instance, the Speech Understanding Research program at Carnegie Mellon University, funded by DARPA, faced insurmountable hurdles in the 1970s due to hardware constraints. source Similarly, neural networks—though theoretically promising—remained impractical without the computational resources to train multi-layered models. source
Compounding this issue was the scarcity of digitized data. Machine learning algorithms, even in their nascent forms, required extensive datasets to identify patterns and generalize effectively. The pre-internet era lacked the infrastructure to collect, store, and process such data, stifling progress in data-driven approaches. source source
The Lighthill Report: A Catalyst for Institutional Withdrawal
In 1973, the British government commissioned mathematician Sir James Lighthill to evaluate the state of AI research in the UK. His report, Artificial Intelligence: A General Survey, delivered a scathing indictment of the field. Lighthill argued that AI had failed to deliver on its “grandiose objectives” and criticized its focus on “toy problems” that bore little relevance to practical applications. source source source (pdf) He identified the combinatorial explosion—the exponential growth of possible solutions as problem complexity increased—as a fundamental flaw in AI algorithms, rendering them ineffective for real-world tasks. source source
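To make the combinatorial explosion concrete, here is a toy sketch (my own illustration, not taken from the Lighthill report): a naive search that considers every sequence of moves grows exponentially with the planning depth, which is why the “toy domain” programs of the era could not scale to real-world problems.

```python
def brute_force_states(depth: int, branching_factor: int) -> int:
    """Number of move sequences a naive exhaustive search must
    examine: branching_factor choices at each of `depth` steps."""
    return branching_factor ** depth

# With just 10 possible moves per step, each extra planning step
# multiplies the search space tenfold:
for depth in (5, 10, 20):
    print(f"depth {depth:>2}: {brute_force_states(depth, 10):,} sequences")
```

At depth 20 the count already exceeds the number of operations a 1970s machine could perform in any practical timeframe, which is the scaling failure Lighthill pointed to.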
The report’s impact was immediate and profound. The UK government withdrew funding for AI research, leading to the closure of all but a few university programs (e.g., Edinburgh, Essex, and Sussex). source source This decision reverberated globally, emboldening skeptics and prompting other nations to reassess their investments. The 1973 BBC debate between Lighthill and AI advocates like Donald Michie and John McCarthy further polarized public opinion, cementing the narrative of AI as a field mired in overpromise. source source
Shifts in Funding Priorities and Institutional Politics
The withdrawal of UK funding coincided with broader shifts in research priorities. In the United States, DARPA—a major patron of AI—began redirecting resources toward “mission-oriented” projects with clearer military applications. source source This pivot reflected growing frustration with AI’s lack of measurable ROI, particularly after the collapse of the Speech Understanding Research program. source Similarly, the failure of machine translation initiatives in the 1960s had already primed policymakers to view AI as a risky investment. source
Academic institutions faced parallel pressures. As funding dried up, AI departments struggled to retain talent and infrastructure. The ACM’s Special Interest Group on Artificial Intelligence (SIGART), however, reported a counterintuitive surge in membership during this period, suggesting that grassroots interest persisted even as institutional support waned. source This dichotomy underscores the complex interplay between funding cuts and intellectual vitality during the First AI Winter.
Conceptual and Methodological Stagnation
The symbolic AI paradigm, dominant in the 1960s and early 1970s, prioritized rule-based systems and logical reasoning. While this approach yielded early successes in domains like theorem proving, it proved ill-suited for tasks requiring flexibility or real-world knowledge. source source The limitations of expert systems—rigid, domain-specific tools reliant on hand-coded rules—became increasingly apparent, foreshadowing their eventual decline in the Second AI Winter. source source
Meanwhile, connectionist approaches (e.g., neural networks) languished due to theoretical and practical barriers. The absence of backpropagation algorithms and computational power stifled progress, leaving neural networks sidelined until their resurgence in the 1980s. source [source](https://cacm.acm.org/opinion/there-was-no-first-ai-winter/) This methodological stagnation left the field without a viable path forward, exacerbating the crisis of confidence.
Conclusion: Lessons from the First AI Winter
The First AI Winter emerged from a perfect storm of overhyped expectations, technological immaturity, institutional skepticism, and methodological rigidity. Its legacy offers enduring lessons for contemporary AI research:
- Balancing Ambition with Pragmatism: Unrealistic promises risk alienating stakeholders and inviting backlash.
- Infrastructure as a Foundation: Advances in hardware and data availability are prerequisites for algorithmic breakthroughs.
- Institutional Resilience: Diversifying funding sources and aligning research with practical applications can mitigate the impact of political shifts.
While the First AI Winter temporarily stifled progress, it also catalyzed a maturation of the field, paving the way for the neural network revolution and the data-driven approaches defining modern AI. source source Understanding its root causes remains essential for navigating future cycles of hype and disillusionment.
Conclusion
This could be the end of homework. The above isn’t innovative. It provides no new insights, but it’s far above “high school” level writing.
I felt there were not enough citations, and using Wikipedia as a primary source is not great, so I’d approach this output with caution. And as others have noted about Deep Research tools, the best research is paywalled, so these tools can’t get to it. That’s a shame, but this is nonetheless a fantastic tool. Have fun with it!