We aim to curate high-quality events for anyone enthusiastic about AI! We do the heavy lifting for you, so you can find what you need with ease! Event details can be found after the overview.
Check out our website https://pioneeringminds.ai/ for more!
TOP Events Overview
NYC IRL: Learn & Meet Fellow Folks working on AI!
[NEW]
Apr.5 19:30-22:30 EST | NYC 🗽 AI Coworking Day + Panel on State of AI Funding w/ Index Ventures, Primary VC & Beat Ventures. Hosted by The AI Furnace. In Person RSVP.
Apr.10 19:30-22:30 EST | NYC AI Users - AI Tech Talks, Demo & Social: AI in Music and Autonomous Agents. Hosted by David Cunningham @ New York AI Users. In Person RSVP.
[NEW]
Apr.24 18:00-22:00 EST | AI Tinkerers NYC April 2024 Meetup. Hosted by AI Tinkerers. In Person RSVP.
Application Series: Leveraging AI to solve your issues!
Apr.02 17:30-21:00 EST | ML/AI Conversations: AI for Meta-Optics and Deep Fakes Detection in KYC. Hosted by ML/AI Conversations. In Person RSVP
[NEW]
Apr.03 12:00-13:00 EST | Open House: AI Strategies and Roadmap: Systems Engineering Approach to AI Development and Deployment. Hosted by MIT. Virtual
Apr.03 12:00-12:30 EST | Proactive and collaborative AI to complete the bench-to-bedside loop in healthcare. Hosted by MIT. Virtual
Apr.09 12:30-13:30 EST | CITP Seminar: Molly Crockett: Producing More While Knowing Less: The Epistemic Risks of AI. Hosted by Princeton University. Virtual.
[NEW]
Apr.10 13:00-14:00 EST | Large Language Models and ChatGPT: What They Are and What They Are Not. Hosted by The Institute for Experiential AI, Northeastern University. Hybrid RSVP.
Apr.11 12:00-12:30 EST | AI and accessibility: Opportunities and challenges. Hosted by MIT. Virtual
[NEW]
Apr.24 13:00-14:00 EST | Giving AI Some Common Sense. Hosted by The Institute for Experiential AI, Northeastern University. Hybrid RSVP.
Deep Dive: Neuroscience, Engineering and AI
Apr.03 18:30-19:45 | Stavros Niarchos Foundation Brain Insight Lecture: Memory as Narrative Power. Hosted by Zuckerman Institute, Columbia University. Virtual
[NEW]
Apr.08 16:00-17:00 | Robust Statistics in High Dimension. Hosted by Yale Institute for Foundations of Data Science, Yale University. Virtual
[NEW]
Apr.16 12:30-13:30 EST | CITP Seminar: Juan Carlos Perdomo – The Relative Value of Prediction. Hosted by Princeton University. Virtual.
NYC 🗽 AI Coworking Day + Panel on State of AI Funding w/ Index Ventures, Primary VC & Beat Ventures
Time: Apr.5 19:30-22:30 EST
In Person: RSVP
The AI Furnace is NYC's largest and most active AI community, started by AI founders Angela Mascarenas and Hamza Zaveri. The AI Furnace is hosting a coworking day in NYC for AI Founders + Operators.
Agenda:
Arrive + Cowork
11AM: Panel with VCs: State of AI Funding with Index Ventures (Jahanvi Sardana), Primary VC (Tobias Citron) and Beat Ventures (Colin Rogister)
~12:30-1PM: Grab lunch with other founders + operators building out of NYC (lunch not included)
NYC AI Users - AI Tech Talks, Demo & Social: AI in Music and Autonomous Agents
Time: Apr.10 19:30-22:30 EST
In Person: RSVP
Featuring Speakers:
Alisha Outridge, CTO, Byte & Chord, Faculty Member at Brown University
Shanif Dhanani, Founder, Locusive
Join Alisha Outridge for a captivating discussion of "AI's Role in Music's Next Act," where the future of music intersects with the cutting edge of artificial intelligence, and explore what a post-AGI world of music might look like.
Shanif Dhanani will speak on "Building Autonomous LLM Agents for Business," covering common design patterns, implementation challenges, and practical suggestions for building an autonomous agent for businesses and their employees.
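For readers new to the topic, here is a minimal, hypothetical sketch in Python of the kind of autonomous-agent loop such talks typically cover. It is not Shanif Dhanani's or Locusive's implementation; the names (call_llm, TOOLS, run_agent) are placeholders standing in for a real chat-completion API and real business tools.

# Illustrative agent-loop sketch: the model alternates between requesting tools
# and producing a final answer, with a step cap as a guard against runaway loops.
TOOLS = {
    "search_docs": lambda query: f"(top documents matching '{query}')",
    "run_sql": lambda query: f"(rows returned by '{query}')",
}

def call_llm(messages):
    # Placeholder for any chat-completion call: it either requests a tool or answers.
    # Stubbed here to request one tool call and then finish.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "search_docs", "input": "Q3 churn report"}
    return {"answer": "Churn rose in Q3; summary drafted from the retrieved report."}

def run_agent(task, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = call_llm(messages)
        if "answer" in decision:                               # the model decided it is done
            return decision["answer"]
        result = TOOLS[decision["tool"]](decision["input"])    # execute the requested tool
        messages.append({"role": "tool", "content": result})   # feed the observation back
    return "Stopped: step limit reached."

print(run_agent("Summarize Q3 customer churn for the sales team."))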
AI Tinkerers NYC April 2024 Meetup
Time: Apr.24 18:00-22:00 EST
In Person: RSVP
AI Tinkerers is a meetup designed exclusively for practitioners who possess technical, machine learning, and entrepreneurial backgrounds and are actively building and working with foundation models, such as large language models (LLMs) and generative AI. If you’re deeply passionate about creating LLM-enabled applications, have hands-on experience in building such systems, and are eager to connect with like-minded individuals who share your level of commitment, then this group is the perfect fit for you. With AI Tinkerers meetups taking place in multiple cities, we cater to a dedicated community of practitioners.
Who is this for?
We’re not “AI Enthusiasts”; we are AI Tinkerers. The core essence of AI Tinkerers lies in active collaboration surrounding early-stage discovery and innovation, which requires a high degree of experimentation, vulnerability, openness to sharing challenges and learnings, and collaboration among individuals with a shared level of expertise. This unique environment allows us to push the boundaries of what’s possible with AI and LLMs while maintaining a strong sense of camaraderie and mutual support.
ML/AI Conversations: AI for Meta-Optics and Deep Fakes Detection in KYC
Time: Apr.02 17:30-21:00 EST
In Person: RSVP
Featuring Speakers:
Mikhail Shalaginov, Co-Founder of 2Pi Optics & Research Scientist at MIT
Konstantin Simonchik, Chief Scientific Officer, Co-founder of ID R&D Inc
We are delighted to announce that our upcoming meetup is scheduled for Tuesday, April 2nd. We invite you to join us at the Capital One Flat Iron office to discuss the latest advancements in Machine Learning and Artificial Intelligence. There will also be an opportunity to network and enjoy some pizza.
Agenda:
5:30PM Doors open
5:30PM - 6:30PM reception, networking
6:30PM - Mikhail Shalaginov, Co-Founder of 2Pi Optics & Research Scientist at MIT - "AI for Meta-Optics"
7:15PM - Konstantin Simonchik, Chief Scientific Officer, Co-founder of ID R&D Inc. - "Two-Level Artifact Detection in Images for Modern Anti-Fraud in KYC"
8:00PM - 9:00PM Further Networking
Open House: AI Strategies and Roadmap: Systems Engineering Approach to AI Development and Deployment
Time: Apr.03 12:00-13:00 EST
Zoom: RSVP
Featuring Speaker:
Dr. David Martinez, a Laboratory Fellow in the Cyber Security and Information Sciences Division at MIT Lincoln Laboratory
As easy as mainstream artificial intelligence tools are to use, AI is far from simple. Engineers have a duty to employ AI responsibly and intelligently to harness all it has to offer to address industry challenges.
In the five-day course AI Strategies and Roadmap: Systems Engineering Approach to AI Development and Deployment, attendees will examine the types of tasks best suited to machines and ones that require the unique skillsets of humans. Additionally, learners will acquire practical experience building machine learning models and algorithms, instilling an appreciation of the complexity of end-to-end AI systems.
Proactive and collaborative AI to complete the bench-to-bedside loop in healthcare
Time: Apr.03 12:00-12:30 EST
Zoom: RSVP
Featuring Speaker:
Yuan Luo, Director, Institute for Artificial Intelligence in Medicine, Northwestern University
Artificial intelligence stands at the forefront of innovation in the rapidly evolving landscape of health care, promising to help lessen the gap between laboratory discoveries and clinical applications. Discover how AI is paving the way for more efficient, precise, and personalized health care, promising a future where technology and human expertise converge to enhance patient care and outcomes. Yuan Luo of the Northwestern University Clinical and Translational Sciences Institute will not only highlight recent achievements but also cast a vision for the future.
CITP Seminar: Molly Crockett: Producing More While Knowing Less: The Epistemic Risks of AI
Time: Apr.09 12:30-13:30 EST
Zoom: RSVP
Featuring Speaker:
Molly Crockett, associate professor in the Department of Psychology and the University Center for Human Values at Princeton University
Scientists envision AI as a way to overcome human cognitive limitations across the research pipeline, improving productivity and objectivity. But proposed AI solutions can themselves exploit our cognitive limitations, making us vulnerable to an illusion of understanding: believing we understand more about the world while actually understanding less. We know less when the proliferation of AI tools creates scientific monocultures, where certain types of methods, questions and viewpoints come to dominate all the rest, making science less innovative and more vulnerable to error. Our analysis provides a framework for advancing discussions of responsible knowledge production in the age of AI.
Large Language Models and ChatGPT: What They Are and What They Are Not
Time: Apr.10 13:00-14:00 EST
Zoom: RSVP
Featuring Speaker:
Walid Saba, senior research scientist at the Institute for Experiential AI
Large language models (LLMs) and chatbots built on top of these models (e.g., ChatGPT) are now all the rage and have triggered the discussion of many serious issues in all aspects of our society, from education to business and government. But what is at the heart of these new developments, and what do they represent? Is human-level artificial general intelligence (AGI) finally here? If not, how close are LLMs and systems like ChatGPT to AGI? Saba will discuss these issues at a very basic level, addressing in the process issues such as bias and toxicity in LLMs, and what is now commonly referred to as “hallucination”. He will briefly discuss how such technologies can be effectively used in a number of applications, sift through the hype and also discuss the limitations of LLMs and what it might take to get us closer to AGI.
AI and accessibility: Opportunities and challenges
Time: Apr.11 12:00-12:30 EST
Zoom: RSVP
Featuring Speaker:
Cynthia Bennett, human-computer interaction researcher, Google
Artificial intelligence could help in efforts to provide digital accessibility for people with disabilities. In this fireside chat, research scientist Cynthia Bennett will define key terms and give some examples of how AI is already helping to make digital spaces easier for people with disabilities to use and participate in. She will also talk about important concerns related to bias and rapid innovation.
Giving AI Some Common Sense
Time: Apr.24 12:00-13:00 EST
Zoom: RSVP
Featuring Speaker:
Ron Brachman, Co-Director and Professor, Computer Science, at Cornell University
With AI’s recent leaps forward, there is immense excitement about its potential and its burgeoning application. But despite demonstrating some amazing capabilities, AI systems continue to make unexpected, unhumanlike blunders, making their behavior worrisomely unpredictable and undermining our trust in them in real-world situations. Surprising gaffes by chatbots and bizarre actions by “self-driving” cars reveal that they just don’t have what we would call common sense.
So how can we fix this problem? In this talk, Ron Brachman will show how we might take some first steps. He begins by examining what it means to have common sense, exploring the role it plays in cognition. Brachman will illustrate when and how common sense comes into play and will talk about how to start building a common sense capability, based in part on work that has been underway in AI for a number of years, and on a novel bigger-picture approach to the architecture of an artificial cognitive system.
The talk will conclude with some discussion about the criticality of common sense in two key challenges: the intelligibility of intentions and actions in AI systems, and the crucial ability – noticeably lacking in current systems – to take advice and act on it. Brachman will make the case that AI systems without common sense should never be allowed to act autonomously in the world, and that AI still has a rich set of under-addressed research problems that it needs to tackle. The talk will not be technical and will use numerous easily-understood examples from the everyday world.
Stavros Niarchos Foundation Brain Insight Lecture: Memory as Narrative Power
Time: Apr.3 18:30-19:45
Zoom: RSVP
Featuring Speakers:
Christopher Baldassano, PhD, Assistant Professor of Psychology at Columbia University
Jennifer Manly, PhD, Professor of Neuropsychology in the Department of Neurology at Columbia University Irving Medical Center
Julie Parato, PhD, Postdoctoral Scientist at Columbia University Irving Medical Center
Memory ties together the many events we experience over the minutes, years, and decades of our lives. It creates meaning for the narratives that form our identity and the stories we tell each other. Simply put, it allows us to make sense of our world. How does the brain organize memories and shape these stories? What happens to these processes as we age, and how can we maintain a healthy mind across the lifespan? In this event, three experts in memory research will bring perspectives from cellular, cognitive, and clinical approaches to explore the narrative that memory helps us form.
Robust Statistics in High Dimension
Time: Apr.8 16:00-17:00
Zoom: RSVP
Featuring Speaker:
Santosh Vempala, Distinguished Professor of Computer Science at the Georgia Institute of Technology
The goal of robust statistics is to find accurate estimates of statistical parameters despite adversarial corruptions of data (an ε fraction of data is arbitrarily corrupted by an adversary). For example, the median is a robust estimator of the mean for a Gaussian distribution. While this topic has been studied since at least 1960 (Huber; Tukey), proposed solutions either had error that scaled polynomially with the dimension or had running times that scaled exponentially. In 2016, concurrent papers [DKKLMS; LRV] gave efficient, robust algorithms for mean and covariance estimation of high-dimensional Gaussians (and generalizations). Since then, there has been steady progress on robust estimation and learning algorithms. In this talk, we will discuss two results for classical statistics problems: (1) a polynomial-time algorithm for robustly learning a mixture of k arbitrary Gaussians (with Bakshi, Diakonikolas, Jia, Kane and Kothari, 2022), which relies on a robust partial clustering algorithm and robust tensor decomposition, both of independent interest as algorithmic tools, and (2) a polynomial-time algorithm for learning an affine transformation of a unit hypercube, a basic setting of Independent Component Analysis (with Jia and Kothari, 2023). The latter is a problem that information-theoretically cannot be solved from robust estimates of moments (unlike essentially all known solvable robust estimation problems), and our algorithm provides new certificates for affine transformations, immune to adversarial noise; the main idea is a robust version of gradient descent.
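As a concrete illustration of the abstract's opening example (the median as a robust estimator of a Gaussian mean), the short Python sketch below corrupts an ε fraction of one-dimensional samples and compares the two estimators. It is a toy demo, not the high-dimensional algorithms the talk covers.

import numpy as np

rng = np.random.default_rng(0)
n, eps = 10_000, 0.05
data = rng.normal(loc=0.0, scale=1.0, size=n)   # clean N(0, 1) samples, true mean 0
data[: int(eps * n)] = 1e6                      # adversary replaces an eps fraction with junk

print("empirical mean:", data.mean())           # dragged to roughly 50,000 by the corruption
print("median:        ", np.median(data))       # still close to the true mean of 0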
CITP Seminar: Juan Carlos Perdomo – The Relative Value of Prediction
Time: Apr.16 12:30-13:30
Zoom: RSVP
Featuring Speaker:
Juan Carlos Perdomo, postdoctoral fellow at Harvard University’s Center for Research on Computation and Society
Algorithmic predictions are increasingly used to inform the allocation of goods and services in the public sphere. In these domains, predictions serve as a means to an end: they provide stakeholders with insights into the likelihood of future events in order to improve decision-making quality and enhance social welfare. However, if maximizing welfare is the question, to what extent is improving prediction the best answer?
In this talk, we discuss various attempts to contextualize the relative value of algorithmic predictions through both theory and practice. The goal of the first part will be to formally understand how the welfare benefits of improving prediction compare to those of expanding access when distributing social goods. In the latter half, an empirical case study will be presented illustrating how these issues play out in the context of a risk prediction system used throughout Wisconsin public schools.
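To make the trade-off the talk examines concrete, here is a small, entirely hypothetical Python simulation (all numbers invented; not drawn from the speaker's work): it compares the welfare gained from sharper risk predictions against the welfare gained from simply expanding how many people an intervention reaches.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_risk = rng.beta(2, 8, size=n)     # each person's true probability of a bad outcome
benefit = 0.3 * true_risk              # treating person i averts the outcome w.p. 0.3 * risk

def welfare(noise_sd, budget):
    """Expected bad outcomes averted when the `budget` people with highest predicted risk are treated."""
    predicted = true_risk + rng.normal(0.0, noise_sd, size=n)
    treated = np.argsort(predicted)[-budget:]
    return benefit[treated].sum()

baseline = welfare(noise_sd=0.20, budget=10_000)   # noisy prediction, small program
sharper = welfare(noise_sd=0.05, budget=10_000)    # better prediction, same reach
broader = welfare(noise_sd=0.20, budget=20_000)    # same noisy prediction, double the reach
print(f"baseline: {baseline:.0f}  better prediction: {sharper:.0f}  expanded access: {broader:.0f}")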