COMPLEXITY

By: Santa Fe Institute
  • Summary

  • The official podcast of the Santa Fe Institute. Subscribe now and be part of the exploration!
    © 2019-2024 Santa Fe Institute
Episodes
  • Nature of Intelligence, Ep. 4: Babies vs Machines
    Nov 6 2024

    Guests:

    • Linda Smith, Distinguished Professor and Chancellor's Professor, Department of Psychological and Brain Sciences, Indiana University Bloomington
    • Michael Frank, Benjamin Scott Crocker Professor of Human Biology, Department of Psychology, Stanford University

    Hosts: Abha Eli Phoboo & Melanie Mitchell

    Producer: Katherine Moncure

    Podcast theme music by: Mitch Mignano

    Follow us on:
    Twitter • YouTube • Facebook • Instagram • LinkedIn • Bluesky

    More info:

    • Tutorial: Fundamentals of Machine Learning
    • Lecture: Artificial Intelligence
    • SFI programs: Education

    Books:

    • Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell

    Talks:

    • Why "Self-Generated Learning” May Be More Radical and Consequential Than First Appears by Linda Smith
    • Children’s Early Language Learning: An Inspiration for Social AI, by Michael Frank at Stanford HAI
    • The Future of Artificial Intelligence by Melanie Mitchell

    Papers & Articles:

    • “Curriculum Learning With Infant Egocentric Videos,” in NeurIPS 2023 (September 21)
    • “The Infant’s Visual World: The Everyday Statistics for Visual Learning,” by Swapnaa Jayaraman and Linda B. Smith, in The Cambridge Handbook of Infant Development: Brain, Behavior, and Cultural Context, Chapter 20, Cambridge University Press (September 26, 2020)
    • “Can lessons from infants solve the problems of data-greedy AI?” in Nature (March 18, 2024), doi.org/10.1038/d41586-024-00713-5
    • “Episodes of experience and generative intelligence,” in Trends in Cognitive Sciences (October 19, 2022), doi.org/10.1016/j.tics.2022.09.012
    • “Baby steps in evaluating the capacities of large language models,” in Nature Reviews Psychology (June 27, 2023), doi.org/10.1038/s44159-023-00211-x
    • “Auxiliary task demands mask the capabilities of smaller language models,” in COLM (July 10, 2024)
    • “Learning the Meanings of Function Words From Grounded Language Using a Visual Question Answering Model,” in Cognitive Science (First published: 14 May 2024), doi.org/10.1111/cogs.13448
    39 mins
  • Nature of Intelligence, Ep. 3: What kind of intelligence is an LLM?
    Oct 23 2024

    Guests:

    • Tomer Ullman, Assistant Professor, Department of Psychology, Harvard University
    • Murray Shanahan, Professor of Cognitive Robotics, Department of Computing, Imperial College London; Principal Research Scientist, Google DeepMind

    Hosts: Abha Eli Phoboo & Melanie Mitchell

    Producer: Katherine Moncure

    Podcast theme music by: Mitch Mignano

    Follow us on:
    Twitter • YouTube • Facebook • Instagram • LinkedIn • Bluesky

    More info:

    • Tutorial: Fundamentals of Machine Learning
    • Lecture: Artificial Intelligence
    • SFI programs: Education

    Books:

    • Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell
    • The Technological Singularity by Murray Shanahan
    • Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds by Murray Shanahan
    • Solving the Frame Problem by Murray Shanahan
    • Search, Inference and Dependencies in Artificial Intelligence by Murray Shanahan and Richard Southwick

    Talks:

    • The Future of Artificial Intelligence by Melanie Mitchell
    • Artificial intelligence: A brief introduction to AI by Murray Shanahan

    Papers & Articles:

    • “A Conversation With Bing’s Chatbot Left Me Deeply Unsettled,” in The New York Times (Feb 16, 2023)
    • “Bayesian Models of Conceptual Development: Learning as Building Models of the World,” in Annual Review of Developmental Psychology Volume 2 (Oct 26, 2020), doi.org/10.1146/annurev-devpsych-121318-084833
    • “Comparing the Evaluation and Production of Loophole Behavior in Humans and Large Language Models,” in Findings of the Association for Computational Linguistics (December 2023), doi.org/10.18653/v1/2023.findings-emnlp.264
    • “Role play with large language models,” in Nature (Nov 8, 2023), doi.org/10.1038/s41586-023-06647-8
    • “Large Language Models Fail on Trivial Alterations to Theory-of-Mind Tasks,” arXiv (v5, March 14, 2023), doi.org/10.48550/arXiv.2302.08399
    • “Talking about Large Language Models,” in Communications of the ACM (Feb 12, 2024)
    • “Simulacra as Conscious Exotica,” in arXiv (v2, July 11, 2024), doi.org/10.48550/arXiv.2402.12422
    45 mins
  • Nature of Intelligence, Ep. 2: The relationship between language and thought
    Oct 9 2024
    Guests:

    • Evelina Fedorenko, Associate Professor, Department of Brain and Cognitive Sciences, and Investigator, McGovern Institute for Brain Research, MIT
    • Steve Piantadosi, Professor of Psychology and Neuroscience, and Head of Computation and Language Lab, UC Berkeley
    • Gary Lupyan, Professor of Psychology, University of Wisconsin-Madison

    Hosts: Abha Eli Phoboo & Melanie Mitchell

    Producer: Katherine Moncure

    Podcast theme music by: Mitch Mignano

    Follow us on:
    Twitter • YouTube • Facebook • Instagram • LinkedIn • Bluesky

    More info:

    • Tutorial: Fundamentals of Machine Learning
    • Lecture: Artificial Intelligence
    • SFI programs: Education

    Books:

    • Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell
    • Developing Object Concepts in Infancy: An Associative Learning Perspective by Rakison, D.H., and G. Lupyan
    • Language and Mind by Noam Chomsky
    • On Language by Noam Chomsky

    Talks:

    • The Future of Artificial Intelligence by Melanie Mitchell
    • The language system in the human brain: Parallels & Differences with LLMs by Evelina Fedorenko

    Papers & Articles:

    • “Dissociating language and thought in large language models,” in Trends in Cognitive Sciences (March 19, 2024), doi.org/10.1016/j.tics.2024.01.011
    • “The language network as a natural kind within the broader landscape of the human brain,” in Nature Reviews Neuroscience (April 12, 2024), doi.org/10.1038/s41583-024-00802-4
    • “Visual grounding helps learn word meanings in low-data regimes,” in arXiv (v2, March 25, 2024), doi.org/10.48550/arXiv.2310.13257
    • “No evidence of theory of mind reasoning in the human language network,” in Cerebral Cortex (December 28, 2022), doi.org/10.1093/cercor/bhac505
    • “Chapter 1: Modern language models refute Chomsky’s approach to language,” by Steve T. Piantadosi (v7, November 2023), lingbuzz/007180
    • “Uniquely human intelligence arose from expanded information capacity,” in Nature Reviews Psychology (April 2, 2024), doi.org/10.1038/s44159-024-00283-3
    • “Understanding the allure and pitfalls of Chomsky’s science,” review by Gary Lupyan, in The American Journal of Psychology (Spring 2018), doi.org/10.5406/amerjpsyc.131.1.0112
    • “Language is more abstract than you think, or, why aren’t languages more iconic?” in Philosophical Transactions of the Royal Society B (June 18, 2018), doi.org/10.1098/rstb.2017.0137
    • “Does vocabulary help structure the mind?” in Minnesota Symposia on Child Psychology: Human Communication: Origins, Mechanisms, and Functions (February 27, 2021), doi.org/10.1002/9781119684527.ch6
    • “Use of superordinate labels yields more robust and human-like visual representations in convolutional neural networks,” in Journal of Vision (December 2021), doi.org/10.1167/jov.21.13.13
    • “Appeals to ‘Theory of Mind’ no longer explain much in language evolution,” by Justin Sulik and Gary Lupyan
    • “Effects of language on visual perception,” in Trends in Cognitive Sciences (October 1, 2020), doi.org/10.1016/j.tics.2020.08.005
    • “Is language-of-thought the best game in the town we live?” in Behavioral and Brain Sciences (September 28, 2023), doi.org/10.1017/S0140525X23001814
    • “Can we distinguish machine learning from human learning?” in arXiv (October 8, 2019), doi.org/10.48550/arXiv.1910.03466
    38 mins

What listeners say about COMPLEXITY

Average customer ratings (1 rating)
  • Overall: 5 out of 5 stars (one 5-star rating)
  • Performance: 5 out of 5 stars (one 5-star rating)
  • Story: 5 out of 5 stars (one 5-star rating)
