
The Databricks Data + AI Summit 2025, held in San Francisco from June 9 to June 12, 2025, brought together the global data community to explore the future of AI and data. The summit hosted 22,000+ in-person attendees, 65,000+ global viewers, 700+ sessions & training programs, 350+ customer teams, and 180+ exhibitors from over 150 countries.
Databricks Co-founder & CEO Ali Ghodsi kicked off the summit, reflecting on the company’s journey and introducing its vision of democratizing data and AI for everyone.
At the 2025 summit, the central theme was clear: completing the stack by evolving the Lakehouse architecture into a full-fledged data-to-AI operating system for the enterprise.
Ghodsi said, “Every person of every skill level should have equal access to work with data and use AI. With Databricks One, we want to make our experience for nontechnical users as amazing as our experience for technical users. This is our first step in making this true so that everyone across the organization can unlock the full value of their data and drive innovation.”
Open Source: The Foundation of Databricks’ Success
Ghodsi highlighted the foundational importance of open source to Databricks' success. He showcased the massive adoption of key community-driven projects:
- Apache Spark: 2+ billion downloads
- Delta Lake: 1+ billion downloads
- Apache Iceberg: 360+ million downloads
- MLflow: 300+ million downloads
This open ecosystem is central to Databricks’ mission and its ability to support a rapidly expanding global community.
Databricks Summit Announcements 2025
Databricks used the summit as a launchpad for groundbreaking new product announcements that extend the Databricks Data Intelligence Platform’s usability beyond technical teams.
Key announcements included role-based UIs, low-code pipeline design, Genie natural-language querying, and enhanced data lineage & governance controls, all built to let analysts and non-technical teams interact with data directly, generate insights, and build AI applications without deep coding expertise.
Databricks One – A UI Built for Everyone
With Databricks One, Databricks launched a simplified user experience tailored for executives, analysts, and business users. It promises role-based interfaces, guided workflows, no-code AI/BI dashboards, and the Genie natural-language query tool.
Lakeflow Designer – No-Code Data Engineering
Lakeflow Designer is a no-code, drag-and-drop ETL pipeline builder designed to make complex data flows accessible to teams without deep Spark expertise. While data pipelines are typically built by data engineering teams, Lakeflow Designer lets data analysts build them on their own without sacrificing data governance or system reliability. It is expected to enter preview later this summer.
Lakebase – OLTP Meets the Lakehouse
Lakebase is a Postgres-compatible OLTP (Online Transaction Processing) database that sits natively within the Lakehouse on open storage, allowing transactional and analytical workloads to run side by side.
This changes the game, bringing together operational data and analytics in one governed, cost-effective platform.
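Because Lakebase is Postgres-compatible, standard Postgres drivers and tools should be able to connect to it. The snippet below is a minimal illustrative sketch only: it uses the common psycopg2 Python driver, and the host, database, table, and credential values are hypothetical placeholders rather than documented Lakebase endpoints.

```python
# Minimal sketch: talking to a Postgres-compatible database (such as Lakebase)
# with a standard Postgres driver. Host, database, table, and credentials are
# hypothetical placeholders, not documented Lakebase values.
import psycopg2

conn = psycopg2.connect(
    host="your-lakebase-host.example.com",  # placeholder endpoint
    port=5432,                              # default Postgres port
    dbname="orders_db",                     # placeholder database name
    user="app_user",
    password="********",
)

with conn:  # commits the transaction on successful exit
    with conn.cursor() as cur:
        # A typical OLTP-style transactional write...
        cur.execute(
            "INSERT INTO orders (customer_id, amount) VALUES (%s, %s)",
            (42, 199.99),
        )
        # ...followed by a point read in the same transaction.
        cur.execute("SELECT count(*) FROM orders WHERE customer_id = %s", (42,))
        print(cur.fetchone()[0])

conn.close()
```

Since the same data sits on open storage inside the Lakehouse, analytical workloads can reach it under the same governance model, which is the side-by-side story Databricks emphasized.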
Agent Bricks
Databricks also demonstrated Agent Bricks, a workspace for building high-quality AI agents.
Ghodsi said, “These are production agents that auto-optimize on your data.”
Agent Bricks addresses common problems with AI agents, such as the difficulty of evaluating their performance and knowing how best to optimize them, and it balances agent quality against the cost of developing agents and putting them into production.
Lakebridge – Data Warehouse Migration
At the Data + AI Summit, Databricks also introduced Lakebridge, an AI-powered migration tool for moving legacy data warehouse systems to Databricks SQL, the data analytics layer within the Databricks Lakehouse platform.
Unity Catalog Enhancements
Unity Catalog saw significant upgrades aimed at reinforcing Databricks’ commitment to governed, unified data access across the enterprise:
- Metric Catalog: Define KPIs and reuse them across all BI and ML tools.
- Fine-grained Access Control: New policies for row- and column-level security, supporting compliance requirements for sensitive data (see the sketch at the end of this section).
- Lineage Across Workloads: Expanded automatic lineage tracking across Python, SQL, Spark, and now agent workflows.
- Partner Ecosystem Integration: Deeper integrations with Tableau, Power BI, Sigma, and others.
Databricks’ Unity Catalog now fully supports both Delta Lake and Iceberg formats, enabling seamless interoperability and governance across systems.
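To make the fine-grained access control point more concrete, here is a minimal sketch of how Unity Catalog row filters and column masks can be applied from a Databricks notebook. The catalog, schema, table, column, and group names are hypothetical examples, and the exact syntax should be verified against current Unity Catalog documentation.

```python
# Minimal sketch of Unity Catalog row filters and column masks, run from a
# Databricks notebook via spark.sql(). All object and group names below are
# hypothetical examples.

# Row filter: non-admins only see rows for the US region.
spark.sql("""
  CREATE OR REPLACE FUNCTION main.governance.us_only(region STRING)
  RETURNS BOOLEAN
  RETURN IF(is_account_group_member('admins'), TRUE, region = 'US')
""")
spark.sql("""
  ALTER TABLE main.sales.orders
  SET ROW FILTER main.governance.us_only ON (region)
""")

# Column mask: redact email addresses for everyone outside the 'support' group.
spark.sql("""
  CREATE OR REPLACE FUNCTION main.governance.mask_email(email STRING)
  RETURNS STRING
  RETURN CASE WHEN is_account_group_member('support') THEN email ELSE '***' END
""")
spark.sql("""
  ALTER TABLE main.sales.customers
  ALTER COLUMN email SET MASK main.governance.mask_email
""")
```

Because the policies live in Unity Catalog rather than in individual BI tools, the same row and column rules apply wherever the governed tables are queried.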
Key Themes and Session Tracks
The Data + AI Summit agenda spanned five major themes, each with dedicated tracks and workshops:
- Lakehouse Architecture & Engineering: Real-time pipelines, Delta Lake, Iceberg, and Databricks-native tools.
- AI, LLMs & Machine Learning: Applying Mosaic AI, MLflow, and agentic systems in production.
- Analytics & BI: Self-service dashboards, natural language queries, and modern BI at scale.
- Governance & Trust: Automated policy management, Unity Catalog extensions, and secure AI workflows.
- Leadership & Strategy: Building data culture, AI-ready foundations, and maximizing Lakehouse investments.
Industry Leaders and Real-World Use Cases
The summit featured insights from industry giants like Walmart, Rivian, Mastercard, and JPMorgan Chase, who shared how they’re leveraging Databricks to scale AI, modernize infrastructure, and unlock real-time insights. Visionaries from Microsoft, NVIDIA, and Google, as well as academic leaders from Stanford, MIT, and Berkeley, headlined keynotes and panels.
Summary
The Data + AI Summit 2025 was a landmark event, not just for Databricks but for the entire data and AI ecosystem. It offered a glimpse into the future of unified analytics, AI-driven business, and the power of an open, collaborative data community. We at Syren, a Databricks Consulting Partner, experienced the magic firsthand at the summit, and we are ready to unlock the future of Data & AI.
Curious how Syren can help you implement Databricks’ latest innovations?