Attendees
Task Group Members or Designees
Fernando Arias, Meeting Co-Chair, CarbonSmart Strategies
Nicolas Baker, CEQ
Ralph DiNola, New Buildings Institute
Projjal Dutta, New York MTA
Joyce Lee, Meeting Co-Chair, IndigoJLD Green + Health
Jonathan Petry, DOD
Keiva Rodriques, Maryland Aviation Administration
Jane Rohde, JSR Associates
David Wagner, U.S. Department of Veterans Affairs
GSA Attendees
Jeremey Alcorn, PBS
Madison Battle (c), OFHPGB
Michael Bloom, Designated Federal Official, OFHPGB
Christopher Bolinger, PBS
Kelli Canada, OFHPGB
Patrick Dale, OFHPGB
Elliot Doomes, PBS
Katherine Erdman
Denise Funkhouser, PBS
Zach Geller, PBS
Andi Hartranft, PBS
Meredith Holland (c), OFHPGB
Kinga Hydras, OFHPGB
Kevin Kampschroer, GSA Chief Sustainability Officer
Ann Kosmal, OFHPGB
Osvaldo Laboy, PBS
David Leites, OFHPGB
Brad Nies, OFHPGB
Mehul Parekh, OGP
Patty Pelikan, PBS
Ana Rawson, PBS
MacKenzie Robertson, OGP
Christine Stearn, PBS
Bryan Steverson, OFHPGB
Walter Tersch, OAE
Zach Whitman, GSA Chief AI Officer
Speaker and Observers
Mahmoud Abuelroos
Rael Ammon
Mike Armes, GAO
Sylvia Augustus
Theresa Backhus, Building Innovation Hub
Lio Barrera, Cassidy & Associates
Eric Bartlett
John Bauckman
Rebecca Blanks
Avi Blau
Amy Blonder, Perkins&Will
Brian Bothwell, GAO
Patrick Casey, NOAA
Jennifer Cassaday
Julie Chesna, Energy Services Media
Alix Cho
Patricia Donohue, U.S. Army
Rama Dunayevich, Autodesk
Kelsey Fortenberry
Johnny Fortune
Louis Friscoe, The Building People
Devin Gates
Rich Gifaldi, Federal Reserve Board of Governors
Gonzalo Gomez, Silva
Roger Grant
Robert Green
Terry Kolda Grossman
Avneet Gujral, Jacobs
Smita Gupta, NBI
Stephen Hagan, Hagan Technologies
Chuck Hardy
Charles Hargett
Bill Healy, NIST
Zinet Ibrahim
DJ Jackson
Tonya James
Lindsey Johnson
Zach Johnson
Kiana Jones
David Kaneda, IDeAS Consulting
Jeff Mang, JC Mang Consulting
Laura McGill
Eve Lin McNaughton, U.S. Green Building Council
Lawrence Melton, The Building People
Charmaine Mendoza
Mark Morales
Tim Morshead, WRNS Studio
Theresa Moss
Pang Moua
Dave Mulcahy
Alexandra Nappier, Architect of the Capitol
Ed Newman
Jorge Nunez
Robert Okpala, Buro Happold
Mark Palmer
Amy Pastor, EXP
Himesh Patel
Tyrone Pelt, USDA
Venice Rashford, OHS
Jason Rawson
Elijah Rhue, Evolver
Angela Robinson
Prudence Robinson
Henrick Roman
Robin Rudy
Robert Rusbarsky
Deborah Sanders, OCFO(BGP)
Frank Santella, The Building People
Virginia Senf, Autodesk
Amarpeet Sethi, Enviropassiv LLC
Meera Sharma, IRS
Steve Smith
Stephanie Stubbs, National Institute of Building Sciences
Gary Stuckenschmidt
Alice Sung, Greenbank Associates
Robert Theel, FAIA
Austin Thielmann
Niki Townsend
Derrick Tucker
Timothy Unruh, NAESCO
Daniel Ward
Megan Wenning
Sara Wessels
Griffin Weyforth
Vernel Williams
Aeri Wittenbourgh, CIV DCSA CDSE
Eric Yang
Thomas Yoon, FRB
Sean Young, NVIDIA
Welcome and Opening Remarks
Michael Bloom went over the rules for the task group.
- GBAC members, who are appointed by the GSA Administrator, actively deliberate, vote, and decide the direction of the task group. Observers are welcome to make public comments at designated public comment times.
- All proceedings are open to the public. Task groups provide advice to the larger Green Building Advisory Committee. That advice is not binding, so it is the collective responsibility of members to ensure that advice is realistic, feasible, and practical, and that what is written into an advice letter has a good chance of being implemented.
- GBAC members are to add “GBAC” to the end of their Zoom name. Phones are to be muted when not speaking. Comments and questions should be added to the chat and will be addressed, if not in the current meeting, then they will be saved for the next meeting.
Zach Whitman, GSA’s Chief AI Officer, provided opening remarks.
- Today we will explore how AI can help us design, construct, operate and renovate federal buildings.
- The federal government manages one of the world’s largest real estate portfolios. Integrating AI into these facilities could significantly improve efficiency, reduce costs, advance sustainability, and enhance occupant well-being. Balancing innovation with ethical and responsible use is essential.
- AI offers powerful tools for smarter decision-making, process optimization, and future needs prediction. However, we must stay vigilant about data privacy, cybersecurity, and the ethical implications of deploying AI technologies in our built environment.
- Implementing AI in alignment with our values and for the public good requires clear guidelines, transparency, and ongoing dialogue with stakeholders, especially the public.
- The first session will clarify the differences between machine learning and Gen AI, helping us understand how these technologies can be applied effectively and responsibly in our buildings. Machine learning is best used to analyze large quantities of data to find patterns and make predictions, and is already used in energy management and predictive maintenance. In contrast, Gen AI is newer and has opened new possibilities in design and simulation, enabling us to create models and scenarios that we previously didn’t think possible.
- The next session will explore how AI is optimizing project delivery. In the design phase, AI helps architects and engineers create optimized building models that maximize energy efficiency and occupant comfort. These tools have been tested, and their benefits are especially clear during construction, where AI-driven project management tools improve scheduling, resource allocation, and risk management, leading to more projects being completed on time and within budget. Imagine a construction site where drones equipped with AI monitor progress in real time, identifying potential issues before they become costly problems. Consider AI tools that analyze supply chain logistics to minimize environmental impact. These advancements are not distant concepts; they are happening now, and we can leverage these technologies to set new standards in federal project delivery.
- The last session will explore how AI enhances the operations and renovation of federal buildings. Energy management systems powered by AI can adapt to usage patterns, weather forecasts, and energy prices to optimize consumption. Predictive maintenance can anticipate equipment failures before they occur to reduce downtime and extend the lifespan of our assets.
- AI enables control of lighting, heating, and cooling in response to/in anticipation of changing environmental factors, creating safer environments for occupants while reducing our energy footprint. These technologies can save money and support our commitment to environmental sustainability.
- Integrating AI into federal buildings goes beyond technology; it’s about creating smarter, safer, and more sustainable environments that meet the needs of the American people. By leveraging AI, we can reduce our carbon footprint, enhance operational efficiency, and set a standard for sustainability. Additionally, as the federal government adopts these technologies, we can spur innovation in the private sector, create jobs in emerging industries, and position the U.S. as a leader in sustainable development. Moving forward, we must prioritize ethical considerations, ensuring that AI systems are transparent, explainable, and contestable, so stakeholders understand decision-making processes. We must also address data privacy concerns and ensure compliance with all relevant regulations and standards.
- Inclusivity is essential, and systems should be designed to serve all users effectively, regardless of their background or abilities. This means considering accessibility from the outset and engaging with diverse communities to understand their needs. As the federal government, we must ensure that AI is used safely, equitably, and in a trustworthy manner across all applications. AI tools used in our practices must adhere to the safety and ethical standards of EO 14110 and OMB Memorandum M-24-10, which are critical for protecting our rights and safety, especially in the built environment.
- AI discussions can be intimidating due to the technical jargon and insider knowledge, but Gen AI requires input from subject matter experts outside of the AI field. Collaboration is essential to develop strategies that maximize benefits while minimizing risks.
- Some questions to consider during today’s discussion:
- How do these AI topics discussed today apply to your work?
- How can they serve as a tool to enhance sustainability, not as an end in itself but as a means to an end?
- What challenges do you foresee, and how might we address them collectively?
- We’re at a pivotal moment; the decisions we make now about how we adopt and govern AI in federal buildings will have lasting impacts. We have the chance to innovate and enhance government operations with this new technology, contributing to a more sustainable and equitable future.
Fernando Arias and Joyce Lee, Meeting Co-Chairs, provided a session overview and background on AI.
Session Learning Objectives:
- Understand AI’s Evolution in Buildings: Explore the timeline and impact of AI in real estate and federal buildings
- Recognize AI’s Potential Pitfalls: Identify key concerns in design, renovation, construction, and operations.
- AI in Federal Policy: Learn about the challenges and opportunities of AI in federal buildings, influenced by executive orders.
- Ethical AI Implementation: Understand the importance of responsible AI use aligned with OMB policies.
- ML vs. GenAI in Decarbonization: Differentiate the roles of Machine Learning and Generative AI in federal decarbonization efforts.
Overview of AI’s Development Timeline and Context
- AI, a branch of computer science, originated in the mid-20th century when the term was coined at a Dartmouth conference in 1956. Before AI was introduced to the built environment, there was a history of making buildings smarter. Automation in building management is nothing new, and the federal buildings portfolio–whether leased or owned–has long been suited for advanced building management systems.
- As technologies like sensors, server systems, and Wi-Fi have become more available, the “Internet of Things” (IoT) has emerged, enabling 24/7 monitoring, audiovisual, HVAC controls, security, fire safety, and energy management. With larger databases, interconnectivity is essential. Similarly, in urban design and Smart Cities, success in design can be measured through data, such as linking walkability with restaurant revenue or linking bike lanes with health evidence.
- The launch of ChatGPT has opened up new opportunities for public engagement with AI. Gen AI can support ESG reporting, especially in global real estate. Imagine organizations, both public and private, having access to low-cost, high-speed analysis. With insights on renewable energy, pollution prevention, low-carbon materials, and waste reduction, the potential positive impact on the real estate industry is significant.
- Many at GSA are involved in the AEC (architecture, engineering, and construction) industry. The annual AIA conference in Washington, DC, this past June featured an AI symposium and several panels, just 18 months after the launch of ChatGPT. Opportunities in Building Information Modeling (BIM) development and digital twins are now available, and panelists will also discuss the use of photogrammetry in this meeting.
- Generative AI images are increasingly appearing in areas like ecological design and urban background, allowing for multiple images to be generated at a high speed. As clients and consumers of these images, it is crucial that we understand what we are viewing and what could be deep fakes. The concept of “guardrails” is very important in AI development to ensure responsible use.
- AI is transforming every industry, including ours. While GBAC has a specific focus within a large agency, greening the federal portfolio is within reach, especially with encouragement from the Administration. Charts from Lawrence Berkeley National Laboratory illustrate how various strategies can lead to a 90% reduction in carbon emissions. Thought leaders like Dr. Nourbakhsh are developing pathways that leverage Machine Learning and Generative AI to predict and create solutions for design and construction.
- The recent Paris 2024 Olympics achieved an impressive goal of reducing carbon emissions by nearly 80% compared to similar past sporting events, focusing on construction and energy. This demonstrates the power of leveraging large data sets. If simulation and modeling made such a significant impact in Paris, AI could further enhance sustainability for the 2028 Olympics in LA.
Potential Pitfalls:
- Energy intensity to power data centers
- Water intensity to cool data center equipment
- Query efficiency
- Grid security
- AI Hallucinations: a phenomenon wherein a large language model (often a Gen AI chatbot or computer vision tool) perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.
- Privacy and data security
How does AI work on a basic level?
Data collection and preprocessing → Feature engineering → Model training (deep learning) → Inference and prediction of output → Feedback loop
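The loop above can be sketched as a minimal supervised-learning pipeline. This is an illustrative toy, not a GSA system: the temperature/consumption readings, the centered-temperature feature, and the one-variable least-squares “training” step are all invented to show the shape of the workflow.

```python
# Minimal sketch of the pipeline: collect -> engineer features ->
# train -> predict -> feedback. Data and model are illustrative only.

# 1. Data collection and preprocessing: (outdoor temp in F, daily kWh) pairs.
raw = [(60, 210), (65, 225), (70, 250), (75, 270), (80, 300), (85, 330)]

# 2. Feature engineering: center the temperature around its mean.
mean_t = sum(t for t, _ in raw) / len(raw)
xs = [t - mean_t for t, _ in raw]
ys = [kwh for _, kwh in raw]

# 3. Model training: one-variable least-squares fit (y = a*x + b).
a = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
b = sum(ys) / len(ys)

# 4. Inference and prediction: estimate consumption for a 78 F day.
pred = a * (78 - mean_t) + b

# 5. Feedback loop: compare against the observed value; in a real
#    deployment, accumulating error would trigger retraining.
observed = 290
error = observed - pred
```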
How does the Executive Order on AI impact federal buildings and infrastructure?
- Safe AI Development: Emphasizes secure, trustworthy AI to harness potential while mitigating risks like fraud, discrimination, and national security threats.
- Government-Wide AI Governance: Mandates a coordinated approach across federal agencies to ensure robust, reliable, and standardized AI systems.
- AI in Federal Infrastructure: Highlights AI’s role in securing critical infrastructure, including federal buildings, against cybersecurity threats.
- AI Use Principles: Promotes responsible innovation, supports workers, advances equity, and protects privacy and civil liberties.
- Federal Workforce Training: Focuses on attracting AI talent and ensuring all federal employees receive training on AI’s benefits and risks.
What progress has been made following the Executive Order on AI?
- NIST: Progressing on the AI Risk Management Framework as mandated.
- Department of Commerce: Developing criteria for testing AI in cybersecurity and biosecurity.
- OMB: Released draft guidance for federal agencies on AI use.
- Department of Homeland Security: Addressing AI-related risks to critical infrastructure.
- Federal Agencies: Several, including GSA, are developing AI strategies and implementation plans.
Machine Learning vs. Generative AI
Machine Learning: A subset of AI focused on systems that learn from data, identify patterns, and make decisions with minimal human intervention. ML models are trained on large datasets to improve over time in tasks like predictions, classifications, and optimizations.
Generative AI: A branch of AI that creates new content, such as text, images, or music, based on learned patterns from large datasets. GenAI models, like GPT, are trained to generate new outputs that resemble the data they were trained on, from reports to realistic simulations.
Use Cases for Machine Learning and Federal Buildings
- Energy Management and Optimization
- Predictive Maintenance: Predicts equipment failures, reducing downtime and costs.
- Energy Usage Prediction: Forecasts energy consumption for efficient use.
- Smart Grids: Optimizes energy distribution within federal buildings.
- Smart Building Operations
- Automated Facility Management: Automates tasks like lighting and climate control.
- Asset Tracking Management: Tracks and maintains building assets efficiently.
- Adaptive HVAC Controls: Adjusts systems based on occupancy for comfort and energy savings.
- Sustainability and Emission Reduction
- Energy Efficiency: Identifies opportunities to optimize lighting, HVAC, and water systems.
- Carbon Footprint Reduction: Suggests strategies to lower emissions, like off-peak energy use and renewable sources.
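As an illustration of the predictive-maintenance idea above, a minimal anomaly check flags sensor readings that deviate sharply from the rest of a window. A deployed system would use a model trained on far richer data; the vibration readings and threshold here are invented.

```python
def flag_anomalies(readings, z_threshold=2.5):
    """Return indices of readings more than z_threshold standard
    deviations from the window mean -- a crude stand-in for the
    learned failure predictors described above."""
    n = len(readings)
    mean = sum(readings) / n
    var = sum((r - mean) ** 2 for r in readings) / n
    std = var ** 0.5 or 1.0  # guard against a zero-variance window
    return [i for i, r in enumerate(readings)
            if abs(r - mean) / std > z_threshold]

# Hypothetical vibration readings from a pump; the spike at index 5
# suggests developing wear before an outright failure.
vibration = [0.20, 0.21, 0.19, 0.22, 0.20, 0.95, 0.21, 0.20]
print(flag_anomalies(vibration))  # → [5]
```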
Use Cases for Generative AI and Federal Buildings
- Automated Design and Planning
- Architectural Design Generation: Generates innovative designs aligned with sustainability, space, and budget goals, speeding up planning.
- Space Layout Optimization: Suggests efficient layouts based on space usage patterns, enhancing comfort and productivity.
- Smart Building Operations
- Generative HVAC Control: Optimizes HVAC systems based on real-time occupancy and weather for energy efficiency.
- Dynamic Lighting Systems: Adjusts lighting automatically for time, occupancy, and activities, reducing energy use and improving the experience.
- Content Generation for Building Communication
- Automated Report Writing: Generates reports on performance, maintenance, and energy use, saving time and effort.
- Occupant Communication: Automates personalized messages for maintenance, energy tips, and emergencies.
- Employee Training: Creates virtual training environments for staff to practice tasks and procedures safely.
Panel Discussions: Exploring Artificial Intelligence and Federal Buildings
AI in Project Delivery
Stephen Hagan of Hagan Technologies
- GSA’s Administrator, Robin Carnahan, has led GSA’s impressive efforts in implementing AI, including over 150 AI pilot projects across various use cases. GSA has a significant impact on communities and the environment across the U.S., given that federal buildings account for 1-2% of total U.S. energy consumption and contribute around 40-45 million metric tons of emissions. There are a variety of data sources related to this impact, making AI essential for analyzing the whole federal building portfolio to extract actionable insights to improve sustainability and efficiency.
- Portfolio management is crucial: assessing the entire inventory of facilities and evaluating their condition and performance as a business model determines which projects need to be initiated and moved into project delivery, whether for repairs, alterations, or new construction. The process from initial approval to handover for facility management and operations typically takes eight years. This process is lengthy but essential for ensuring quality and consistency.
- The P100 Facility Standards by GSA’s Public Buildings Service outline criteria that must be adhered to in every design and construction contract. P100 establishes quality control and uniformity across projects and is supported by a comprehensive body of knowledge and training. Digital projects can align with the P100 standards, enabling buildings to self-check or be checked for compliance, highlighting the potential of using digital models to ensure adherence. A significant challenge is that the construction industry is among the least digitalized sectors, often relying on expensive tools while still operating in analog ways. Manual handoffs remain prevalent, which hinders efficiency. Embracing digital processes could enable extensive visualization, analysis, and data-driven decision-making.
- Mr. Hagan encourages everyone to explore his Flipboard, which aggregates various articles and insights related to AI and architecture, including how AI can enhance digital twins from 3D to 4D. Mr. Hagan emphasizes the collaborative nature of this resource, inviting contributions from all participants.
Sean Young of NVIDIA
- AI’s impact on work processes can be compared to reviving a dried-out plant with water. Just as a neglected plant can thrive again, AI will enhance productivity by addressing redundant and tedious manual tasks. Instead of replacing jobs, AI will augment and improve how we work.
- It is important to understand and adopt AI skills to effectively hire and implement AI solutions. AI will evolve from existing software, similar to past advancements from 2D to 3D to BIM, and will become integrated into tools like AI-enhanced BIM. Additionally, organizations will need to develop their own AI capabilities to improve services and workflows. For example, in the future, design may shift from mouse-based inputs to typing specifications like square footage, energy, and utility requirements. AI will help optimize space usage, similar to how computational geometry determines shapes.
- While innovation in AI is emerging, we’re still in the early stages. In construction, AI is expected to significantly impact project management by enhancing scheduling, cost control, and safety. There is a lot of data available from construction sites, which can be collected through methods like drone surveys and photogrammetry, with AI being integrated into these processes to improve efficiency and oversight.
- AI can help solve the challenges of managing large datasets, which can be overwhelming for humans to analyze daily. AI can efficiently handle tasks such as progression analysis, change detection, predictive maintenance, clash detection, and ongoing safety monitoring on construction sites. On the operations side, Nvidia has a platform called Metropolis that utilizes computer vision and multiple cameras to enhance safety, security, and operational efficiency. For example, it can help manage airport traffic by opening lanes based on real-time crowd analysis or identify potential security threats by analyzing movements and facial expressions.
Examples of AI Applications in Design and Operations:
- Populous, the designer of stadiums, uses AI to predict sightlines and ergonomics for various events, as well as potential revenue by analyzing the placement of concessions. This AI model evolves from the design phase into a digital twin that enhances the customer experience during operations.
- Flood prediction by Stantec utilizes AI to forecast potential flooding based on weather and historical data. This system not only predicts flood events but also evaluates evacuation routes and potential damage, offering a more efficient approach to disaster preparedness through AI.
- It is critical to leverage the decades of data to generate actionable intelligence, which is central to what Nvidia does and what AI does. Nvidia provides developer tools, enabling partners like Autodesk to create their own AI applications. These tools support data science teams in developing tailored AI solutions that can be deployed on Nvidia’s computational infrastructure. The key message is that effectively utilizing existing data can lead to significant improvements in processes and operations.
- Creating your own AI model requires a substantial amount of data. Training involves a reinforcement process where the AI is rewarded for correct recognition and penalized for mistakes. A more efficient approach is to start with an existing model and fine-tune it using a smaller dataset, such as thousands of images, instead of starting from scratch. This method allows for the customization of the model to generate specific outputs, like designing a certain type of building. A lot of benefits can come from Retrieval Augmented Generation (RAG), which allows users to leverage existing models without the need for building or fine-tuning their own. RAG connects a model to a database, enabling real-time data lookups and reducing the risk of inaccuracies or “hallucinations.” Nvidia’s Chat RTX allows users to create a RAG database from files on their desktop.
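The RAG pattern described above can be sketched in a few lines: retrieve the most relevant document for a query, then ground the model’s prompt in it. The retrieval here is simple keyword overlap and the document store is invented for illustration; production systems use vector embeddings and an actual LLM call.

```python
import re

# Toy document store standing in for a facilities database.
docs = {
    "hvac": "Building 12 HVAC filters were last replaced in March.",
    "energy": "Building 12 used 48,000 kWh in August, down 6% year over year.",
    "roof": "The Building 12 roof membrane was inspected in June; no leaks found.",
}

def tokens(text):
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query):
    """Pick the document sharing the most words with the query --
    a stand-in for embedding-based vector search."""
    q = tokens(query)
    return max(docs.values(), key=lambda d: len(q & tokens(d)))

def build_prompt(query):
    """Ground the model's answer in retrieved text to curb hallucinations."""
    context = retrieve(query)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How much energy did Building 12 use in August?")
# The prompt now carries the August kWh record, so a connected LLM
# can answer from real data instead of guessing.
```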
- To deploy AI effectively, it must be trained, preferably in synthetic environments, where scenarios like recognizing aggressive behavior or car accidents can be simulated. This training approach allows the AI to learn to identify such events accurately. Additionally, the infrastructure needed for edge computing is crucial for deploying these solutions.
Virginia Senf of Autodesk
- Autodesk is a design and make software company with a 42-year history of leading digital transformations in the AECO (Architecture, Engineering, Construction, and Operations) sector. Autodesk has been involved in the transition from hand drafting to electronic design with CAD and AutoCAD in the 1980s, the introduction of Building Information Modeling (BIM) about 22 years ago, and moving the industry from 2D to 3D modeling. Most recently, Autodesk was chosen as the design and make partner for the LA 2028 Olympics.
- BIM has significantly improved the accuracy of designing and maintaining digital twins of built assets and it has also resulted in a massive amount of data. In the AECO industry, this issue is particularly pronounced, with estimates suggesting that 95% of generated data goes unused, costing nearly $2 trillion annually in poor decision-making. Despite the abundance of data, it remains largely untapped and presents a frustrating challenge. As a project progresses, the data volume increases. However, every time there’s a handover of information between teams or phases, data loss occurs, leading to inefficiencies and risks that ultimately affect budgets and timelines. Autodesk’s vision aims to address this issue by uniting disconnected stakeholders and supporting a fully integrated digital asset lifecycle, from design through to operations.
- Through an outcome-based approach, AI can address the challenge of connecting stakeholders and maximizing value from underutilized data. This new paradigm centers project outcomes in the design process, shifting away from the traditional model of designing first and analyzing later. Instead, it starts with data and uses AI to generate optimal options at the beginning of the project lifecycle. This allows organizations, like GSA, to evaluate trade-offs and make informed decisions early on, based on key project constraints. By defining desired outcomes upfront—such as the maximum cost for a retrofit—organizations can enhance productivity throughout the asset lifecycle.
- By defining constraints—such as preferred construction methods or sustainability and environmental considerations—AI can explore a wider range of options and help select the best solution for each project, ultimately enabling more thorough exploration than traditional methods allow. The MacLeamy Curve, developed by Patrick MacLeamy, former CEO of HOK, illustrates the value of starting early in the design process. This framework highlights that the cost of design changes increases as a project progresses, while the ability to influence costs decreases. By investing more effort in the early design stages, teams can significantly reduce costs and errors during construction and operations. While the MacLeamy Curve primarily focuses on cost, it also applies to other factors, such as sustainability outcomes. Ms. Senf then referenced the sawtooth curve of data loss, noting that projects often begin with the least amount of information, and prompted the audience to reconsider that assumption, suggesting there may be more data available at the start of a project than is typically recognized.
- Current AI applications allow organizations to benefit from AI even without having their internal historical data sets ready for training. For example, AI can help address data gaps in the early stages of a project using an embodied carbon extension Autodesk developed in collaboration with C-scale. This approach means that users can benefit from the insights provided by the model without needing to provide their own historical project data. This enables accurate, real-time analysis for early-stage designs and allows for continuous optimization as more project inputs are processed.
- It is important to make data more usable, structured, and consistent; this is a common challenge faced by many organizations. Leveraging existing AI models is a good starting point for maximizing data value. The promise of AI lies in its ability to extract value from underutilized data that organizations have generated but often overlook or mishandle. By leveraging existing data on built assets and past projects, AI can enhance design, delivery, and maintenance practices. Autodesk, with its industry perspective, frequently receives requests for best practices, which can be informed by analyzing this previously untapped data.
Data Quality:
- Once data is structured and connected, AI can facilitate cross-project analysis, providing valuable insights into successful projects, sustainability in retrofits, and lessons from projects that faced budget and schedule challenges. However, while the potential of AI is promising, it’s crucial to recognize that the effectiveness of AI relies not only on the amount of data but also on its accuracy. AI’s accuracy heavily relies on the quality of that data. In our industry, many quality assurance processes are still manual and underdeveloped. For example, one customer analyzed models across 300 projects and found that certain assets had up to 10 different naming conventions. Using the example of a hospital, a nursing station might be labeled inconsistently, such as “nurses station” in one place and “nursing station” elsewhere.
- This inconsistency highlights a key opportunity for organizations like GSA, which has access to one of the largest asset and data portfolios, so obtaining valuable training data is not a barrier. Additionally, GSA’s investments in establishing clear BIM standards provide a strong foundation for structuring data and ensuring its quality, making it easier to leverage AI effectively.
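A first pass at the naming-convention problem described above can be as simple as canonicalizing labels before any training or cross-project analysis. The variant list below is invented for illustration; a real effort would derive it from the organization’s BIM standards.

```python
# Map known label variants to one canonical asset name so that
# downstream analysis and model training see consistent data.
CANONICAL = {
    "nurses station": "nursing station",
    "nurse station": "nursing station",
    "nursing station": "nursing station",
}

def normalize_label(label):
    """Lowercase, trim, and collapse whitespace, then apply the
    canonical mapping; unknown labels pass through for human review."""
    key = " ".join(label.lower().split())
    return CANONICAL.get(key, key)

labels = ["Nurses Station", "NURSING  STATION", "nurse station"]
print({normalize_label(l) for l in labels})  # → {'nursing station'}
```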
- It’s important to discuss the risks of training AI on poor-quality data, especially in our industry. The consequences can be much more severe, such as designing a foundation incorrectly or retrofitting a building that fails to meet modern codes. Once data is properly structured, it can be harnessed to support AI applications. At Autodesk, we see AI as a co-creation partner, enhancing human efforts rather than replacing them. AI will help automate repetitive tasks, boost human creativity, and analyze data for valuable insights. Real-world examples of what outcome-based design looks like:
- Autodesk is supporting a project in Oakland, California aimed at addressing the critical shortage of affordable housing in the Bay Area. In the project’s planning phase, specific desired outcomes were identified, such as cost, embodied carbon, and diversity of room types, represented on a spider web diagram. By leveraging AI, the project team explored various configurations and assessed the trade-offs associated with these outcomes. By combining building layouts and configurations, they identified the optimal options that aligned with the project’s specific goals. This showcases how AI can enhance our creativity, ultimately leading to the design and construction of beautiful, affordable homes in a quick and sustainable manner.
- Autodesk designed a hospital in Northern Europe, where 20 to 30 different design patterns were initially identified. By performing AI-driven post-processing analysis on the architectural designs, the team reduced these to just 5 or 6 patterns. This simplification made it much easier and faster to automate fabrication and establish repeatable processes, resulting in a hospital that was quicker, cheaper, and more sustainable to build.
Potential Pitfalls of AI in AEC:
- Specifying products or materials that are no longer manufactured
- Recommending layouts that don’t match today’s codes and regulations
- Over or underestimating the amount of material needed for a specific structural component (concrete, steel, etc.)
- Suggesting clash-prone MEP layouts based on historical project data
Main takeaways:
- Making decisions early in the project life cycle is crucial and enhancing generic models with your unique datasets can significantly boost AI’s effectiveness.
- AI is only as good as the data it is trained on
- If AI is trained on poor quality data, it will generate faulty solutions
- AI still needs to be checked
- For GSA, investing in and validating your BIM standards now will create a strong foundation of high-quality data, setting you up for success as you move forward.
Questions and Answers - AI in Project Delivery Panel - Stephen Hagan of Hagan Technologies, Sean Young of NVIDIA, and Virginia Senf of Autodesk
- How can we make sure that equity and inclusivity are addressed in high-tech scenarios using AI?
- Sean Young: I want to emphasize Virginia’s point about “garbage in, garbage out.” AI doesn’t inherently know anything; it must be trained on quality data. If the data contains misinformation, the AI will accept that as truth. A crucial practice here is data science, which can be integrated into your BIM methodology. This provides a framework for standardizing your data. Additionally, data science can be applied to various data types, such as Excel or text data. Nvidia offers tools for data science, but before implementing AI, it’s essential to ensure your data is prepared, correctly labeled, and suitable for training. That discipline is vital, and Nvidia is here to assist you with it.
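The data preparation Sean describes can be as simple as rejecting incomplete, mislabeled, or implausible records before training. A toy sketch, with hypothetical field names and ranges:

```python
# Illustrative pre-training data hygiene ("garbage in, garbage out").
# Field names, labels, and plausible ranges are assumptions, not a real schema.

VALID_LABELS = {"office", "courthouse", "warehouse"}

def validate(record):
    """Return a list of problems; an empty list means the record is usable."""
    problems = []
    if record.get("building_type") not in VALID_LABELS:
        problems.append("unknown or missing label")
    area = record.get("floor_area_sqft")
    if not isinstance(area, (int, float)) or not (100 <= area <= 5_000_000):
        problems.append("floor area missing or implausible")
    return problems

records = [
    {"building_type": "office", "floor_area_sqft": 120_000},
    {"building_type": "ofice", "floor_area_sqft": 120_000},   # typo in label
    {"building_type": "warehouse", "floor_area_sqft": -5},    # impossible value
]
clean = [r for r in records if not validate(r)]
print(len(clean))  # only the well-formed record survives
```

Production pipelines layer many more checks, but the principle is the same: only validated, correctly labeled records reach the model.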
- Stephen Hagan: I’d like to highlight the importance of prompt engineering. When considering outcomes, it’s essential to clearly define the problem you’re trying to solve. The way you frame your questions to the data will significantly influence the answers you receive and what you can do with that information.
- Sean Young: As the dialogue develops with the results from AI, we can incorporate guardrails to ensure accuracy during the inference process. These guardrails help prevent misinformation from affecting the responses. Additionally, rather than allowing completely free-form input, we might implement features like drag-and-drop tools to guide users in formulating their prompts, making the process more tailored. Moreover, techniques like RAG can enhance the reliability of responses by ensuring they’re verified against real data. This approach will help maintain the integrity of the information provided.
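The RAG approach Sean mentions can be sketched in a few lines: retrieve relevant passages from a trusted document set and build a prompt that forces the model to ground its answer in, and cite, those sources. This toy version scores passages by word overlap (real systems use vector embeddings); the sample documents are invented:

```python
# Minimal retrieval-augmented generation (RAG) sketch: ground a question
# in trusted source passages before it ever reaches a language model.
# Word-overlap scoring is a stand-in for embedding-based retrieval.

def tokenize(text):
    return set(text.lower().split())

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query; return the top k."""
    q = tokenize(query)
    scored = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Assemble a grounded prompt that instructs the model to cite sources."""
    context = retrieve(query, documents)
    sources = "\n".join(f"[{i+1}] {d}" for i, d in enumerate(context))
    return (f"Answer using ONLY the sources below, citing them by number.\n"
            f"Sources:\n{sources}\n\nQuestion: {query}")

docs = [
    "Corridor widths in patient wings must be at least 8 feet.",
    "Roof membranes should carry a 20-year warranty.",
    "Exterior lighting must meet dark-sky requirements.",
]
print(build_prompt("What is the minimum corridor width?", docs))
```

Because the answer must cite numbered sources, a reviewer can verify each claim against the retrieved passages — the guardrail the panel describes.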
- How can we overlay AI into affordable housing projects?
- Virginia Senf: Cost is certainly a key outcome that our customers prioritize. There’s also a question about how AI can factor in other outcomes, such as the impact on existing residents or the environment. The critical step is understanding which outcomes are most important. For instance, you could invest $100 million in creating a perfect building that has no negative effects on the neighborhood, but if your budget is only $10 million, you’ll need to make some trade-offs. As we develop our AI offerings, we’re focusing on this ability to track multiple outcomes—think of it as a spider web with several factors in play. Ultimately, it’s up to you to decide, with AI’s help, which design aligns best with your unique use case by analyzing thousands of design options.
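The multi-outcome trade-off Virginia describes — the “spider web” of factors — amounts to scoring each candidate design across several outcomes with stakeholder-chosen weights. A hedged sketch with invented scores and weights:

```python
# Outcome-based design comparison sketch. Each design is scored 0-1 on each
# outcome (1 = best); weights express project priorities. All numbers are
# illustrative, not from any real project.

designs = {
    "A": {"cost": 0.9, "embodied_carbon": 0.4, "room_diversity": 0.6},
    "B": {"cost": 0.6, "embodied_carbon": 0.8, "room_diversity": 0.7},
    "C": {"cost": 0.5, "embodied_carbon": 0.9, "room_diversity": 0.9},
}

weights = {"cost": 0.5, "embodied_carbon": 0.3, "room_diversity": 0.2}

def score(outcomes, weights):
    """Weighted sum across the spider-web axes."""
    return sum(weights[k] * outcomes[k] for k in weights)

ranked = sorted(designs, key=lambda d: score(designs[d], weights), reverse=True)
print(ranked)  # designs ordered best-aligned first
```

Changing the weights — say, prioritizing embodied carbon over cost — reorders the ranking, which is exactly the trade-off conversation AI tooling puts in front of the owner.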
- Virginia, can you elaborate on the guardrails placed around affordable housing projects using AI?
- Virginia Senf: Joyce mentioned Project Phoenix earlier, which is fascinating because it’s an AI company developing wall materials from an organic mushroom compound, making it environmentally friendly. The benefit of outcome-based design is that it allows you to set specific goals at the project’s outset and then verify those outcomes during the operations phase. This partnership across the full project life cycle is crucial. After construction, it’s important to assess whether the actual costs and sustainability requirements were met. Feeding real-world results back into the model enhances the quality of future inputs. As we discuss monitoring built assets, all collected data can be utilized to improve future projects. This also applies to building products—manufacturers often promote their products with claims of durability and warranties. The key question is whether these claims hold true in practice. Tracking costs and performance over the entire asset life cycle is vital for making informed decisions and improving future designs.
- Are there any particular programs or initiatives that each of your companies is embarking on so that we can ensure gender equity and inclusivity are included in data quality?
- Virginia Senf: At Autodesk, one of our core values is diversity and inclusivity, and this extends to our approach to AI. The industry is still navigating how to effectively incorporate diverse data and perspectives, and I want to be transparent about our ongoing efforts in this area. There’s significant potential as we deploy initial tools and models to monitor their real-world performance and suggestions. Gathering feedback from these implementations will be crucial in refining and enhancing the quality of our tools’ recommendations. This iterative process will help ensure that our AI solutions are not only effective but also inclusive and representative of diverse needs.
- Sean Young: In addition to the technology and data science approach we’ve discussed, ensuring diversity and inclusion among our people is crucial. Nvidia is one of the most diverse companies I’ve experienced, and we’re very proactive in our hiring practices to promote diversity and inclusion. This focus not only enhances the user experience but also impacts the data we generate and the overall results of the company. A diverse team brings different perspectives and ideas, which ultimately enriches our AI solutions and their effectiveness.
- How do we build responsibly with the current portfolio information that you all now know? Are there opportunities to begin to think about a small pilot project or a training model within this portfolio that you could foresee as a first start so that this particular iteration can begin to inform future projects?
- Stephen Hagan: We have a range of projects at different stages, and it’s essential to focus on those currently in the formulation stage. Each of the 11 regions of the Public Building Service has individuals eager to pilot new initiatives. Engaging these advocates and launching pilot projects will be key. The most important thing is to take action, experiment, and see what results emerge.
- Sean Young: Going back to my earlier example, everyone should familiarize themselves with the P100. This presents a fantastic opportunity to experiment with AI. You can use RAG to interact with the P100, and you can get started on this yourself in just five minutes—simply download ChatRTX.
- Virginia, based on what you learn from the housing project, is there anything relating to let’s say office buildings, courthouses, or other regularly seen public building types that you can translate to inform future projects?
- Virginia Senf: We’re seeing that while each building type—such as hospitals, courthouses, and office buildings—has its unique characteristics, there are repeatable patterns within those types. For instance, the analysis conducted for hospitals can also be applied to office buildings to identify similar layouts and room patterns, making it easier to standardize manufacturing and construction processes. Additionally, a valuable tool for our customers is the ability to access sources for suggestions or summaries, allowing them to verify the accuracy of information. This feature enables users to confirm where data originates, whether from a model or an RFP, ensuring both efficiency and reliability in the decision-making process. Overall, there’s significant potential to apply these insights across various building types.
- For Autodesk design options, is cost also an outcome criterion? (Can you assess the cost of each design suggestion?)
- Virginia Senf: Yes, cost is one of the primary outcomes many of our customers consider.
- Can Autodesk assess “non-conventional” outcomes, such as circularity of the building, contribution to biodiversity protection, indoor air quality…?
- Virginia Senf: Not today, but these are certainly outcomes our team is considering adding to our platform. A lot of this ties to ensuring we can find strong partners like C-Scale to partner with on data models.
- Given the large portfolios of existing buildings in all the agencies that were shared, I would like to hear the panelists’ perspectives on using AI to support existing building retrofits, especially related to net zero energy and decarbonization.
- Virginia Senf: The concept of a roadmap is essential in this context. Achieving net zero won’t happen overnight; it’s a gradual process. All panelists have emphasized that AI should augment our existing processes rather than replace them. The opportunity lies in examining the various teams and disciplines within the GSA to identify measurable, achievable improvements that can be made using AI. This approach will allow you to progressively work towards the net zero goal, making the idea of a structured roadmap critical for guiding that journey.
- Stephen Hagan: I’d like to emphasize that a roadmap should be closely linked to a strategic plan, outlining both the strategy and the steps needed to achieve it. One effective approach we’ve taken with the BIM program at GSA and now with AI is implementing a rewards program to recognize progress. We focused on identifying projects that couldn’t have been accomplished without AI, celebrating those successes. This not only highlights achievements but also helps build a body of knowledge around successful initiatives, providing a foundation to continue advancing our efforts.
- Sean Young: In terms of technology, Virginia showcased a great demo using the C-Scale model to predict energy outcomes, highlighting how this type of technology is already available for trial. For instance, there’s a 30-day trial for Autodesk software, allowing you to explore these tools right away. Similarly, for retrofitting projects, starting with scanning and capturing the as-built conditions is crucial. AI can assist in this process through techniques like NeRFs (neural radiance fields) and Gaussian splatting. If anyone is interested, I’d be happy to provide more information on these technologies and how they can enhance your projects.
- Do you see any of your clients or companies that you work with have already applied AI to facilities capital planning? If not, what may be the difficulty? Is it a reason for imperfect BIM modeling or not enough of the data portfolio to do capital planning with?
- Sean Young: Nvidia has many U.S. public sector customers utilizing our AI solutions for building operations, particularly with our Metropolis technology. While I don’t have specific examples handy, Rob and Rudy from Nvidia may be able to provide more details. Looking ahead, a key frontier is digital twins, which Virginia also mentioned. This concept involves integrating all relevant data to create a digital representation not just of a building, but potentially of entire cities, like Washington, D.C. This would enable better planning for infrastructure and enhance user experiences, particularly in relation to transportation costs and logistics. A digital twin can help simulate and continuously refine a space, allowing for ongoing monitoring and improvement, making it an exciting area for future development.
- Virginia Senf: I’ll second that. The tool I demonstrated earlier, which facilitates analysis, was originally developed for capital planners who often collaborate with design partners and architects. We’re now expanding access to this tool, making it more widely available for use in the capital planning process. It’s already being utilized effectively in that context today.
- Stephen Hagan: I think Sean’s point is spot on. Imagine having a digital twin of an entire portfolio for a company, organization, or agency. This would allow you to analyze the performance of multiple projects and buildings collectively. By visualizing and assessing the entire portfolio, you can make informed decisions about maintenance, upgrades, and resource allocation, ultimately enhancing the efficiency and effectiveness of building operations across the board.
- Federal Government building data cannot currently be shared on these commercial AI platforms. How can we encourage secure collaboration while also allowing for open, cost effective access for the Government to these platforms?
- Sean Young: There are no restrictions when it comes to running AI; you can perform AI fine-tuning and inference on your GovCloud or on-premises. Nvidia is here to assist you in understanding how to implement these solutions effectively.
- Architecture and Engineering usually require different tools and multiple models often need to be created due to this, has there been a discussion around this to optimize the integration of these models?
- Virginia Senf: Yes, this is a great call out. At Autodesk, we have been working to build out a data model that connects data across these different models so that AI can be trained on the full picture.
- How can AI differentiate between truth or disinformation or misinformation?
- Regarding optimizing “human experience,” how can AI display outcomes such as displacement or any negative impacts of gentrification of original neighborhood residents?
- Can AI help GSA find all the BIM, CMMS, and other related software that is paid for but not in use? And how much will that cost?
- Are your staff diverse enough in regards to race, gender and ethnicity to check for poor data or misinformation? If not, what are your goals to create guard rails as you create/cultivate AI data?
- As for digital twins, how much does GSA leverage the energy modeling in this process?
AI in Facilities
Questions and Answers - AI in Facilities Panel - Keiva Rodriques of Maryland Aviation Administration, Lawrence Melton of The Building People, and Tim Unruh of NAESCO
- What are the most significant challenges to be encountered when integrating AI into building operations or renovation projects and how can they be addressed?
- Lawrence Melton: It became clear that our discussions today have highlighted the complexities of adopting technology, particularly AI. Adoption is often the most challenging aspect, as we’ve seen across the industry. The conversations have revolved around three main areas: the problem statement, the technology itself, and facilities operation. This aligns with what I refer to as the “three-legged stool” of complexity in technological change. A crucial takeaway is that organizational alignment is essential—whether it’s between CIOs, construction teams, facility organizations, or program offices. In my experience working with various agencies, the biggest obstacle remains achieving alignment across these organizations. This includes not only policy and systems but also addressing data privacy concerns, which involve both the building’s data and the people within it. Despite years of progress in AI and machine learning, the challenges often come down to the people involved. Getting everyone on the same page regarding goals—such as ROI and energy savings—is critical. We have found that a systematic approach is necessary to engage all stakeholders and champions, considering the entire lifecycle of the organization. Ultimately, achieving organizational alignment is still the foremost challenge we face in adopting new technologies.
- Keiva Rodriques: That response was right on target. For my organization, there is a need for a culture shift, especially within state agencies. Having worked both as a consultant and now in executive leadership, I see firsthand how slowly the government adapts to technological changes, whether through policy implementation or staff training. Creating an environment where staff feels comfortable engaging with data is crucial. With multiple architecture, engineering, and design firms using different AI solutions, it’s essential to foster understanding and support among the team about how these tools can be impactful. Ultimately, it’s about bridging that cultural gap and ensuring everyone is on board with the changes. Supporting clients through this transition is vital to achieving successful outcomes. Your insights highlight the importance of patience and continuous effort in cultivating a more data-driven culture.
- Tim Unruh: Adoption is a significant issue in the ESCO (Energy Services Company) sector and construction in general. The pressure of tight schedules often leads to rushed decisions, making it challenging to fully integrate new technologies or practices into projects. This urgency can hinder the careful planning and training needed to effectively implement AI and other innovative solutions. It’s crucial to find a balance between meeting deadlines and ensuring that all stakeholders are aligned and prepared for the changes. Addressing these challenges requires a focus on fostering a culture of adaptability and open communication within teams, allowing for smoother transitions and better outcomes.
- Lawrence Melton: You bring up an important point about the significance of cross-agency collaboration in driving the adoption of new technologies, particularly in the public sector. The experience from the early stages of the Smart Buildings program at GSA illustrates how critical partnerships, like the one with the Department of Energy, can help overcome barriers and achieve shared goals. Your reference to Rogers’ Adoption Curve is particularly relevant. It provides a framework for understanding how organizations transition through different stages of adoption—from early adopters to the majority. Recognizing where stakeholders stand in this curve can help tailor strategies for engagement and communication, ensuring that everyone is aligned and motivated to embrace change. Encouraging stakeholders to identify and leverage influential partners across agencies can create a more supportive environment for innovation. It’s a reminder that collaboration and shared vision are often key ingredients in successfully implementing transformative initiatives. If you find that link, it would certainly benefit others looking to understand this process better.
- Tim, from the perspective of ESCOs - what are the most promising future applications of AI in enhancing not just energy efficiency in federal buildings, but other processes here?
- Tim Unruh: AI can play a major role in enhancing project performance, particularly in analyzing and utilizing project manuals more effectively. By integrating AI models, site personnel can access the specific information they need quickly, streamlining processes and improving efficiency on-site. Additionally, emphasis on safety is crucial. AI can predict potential safety issues by analyzing historical data and identifying risk patterns, ultimately leading to safer work environments. When it comes to energy consumption and equipment performance, using AI to aggregate and analyze this data can provide valuable insights. This proactive approach allows for early identification of potential performance issues, enabling teams to address them before they escalate. By training AI models on expected challenges, organizations can refine their strategies to maximize energy savings and overall project performance. Incorporating these elements not only enhances operational efficiency but also contributes to achieving sustainability goals. It’s a powerful reminder of how AI can transform project management beyond design, focusing on ongoing performance and efficiency.
- Keiva, how does AI influence your approach to life cycle cost analysis and material selection and renovation projects? Can you discuss a project where AI significantly impacted decision-making?
- Keiva Rodriques: The Maryland Aviation Administration (MAA) operates under the Maryland Department of Transportation, managing both airports and regulating 34 regional airports. MAA aims to be proactive in project design, focusing on asset data collection to maintain infrastructure, particularly pavement quality, which is critical for safe aircraft operations. To minimize disruptions to airline partners and passengers, MAA prioritizes comprehensive data-driven decision-making. They view AI as a key business strategy tool, or “decision intelligence,” to enhance preventative maintenance. This includes refining data collection on vertical infrastructure, like the central utility plant, to address maintenance needs before issues escalate. Additionally, MAA is exploring the use of drones for airfield inspections, which can capture high-quality images to create digital records of pavement deterioration. This data will aid in predicting necessary rehabilitation and reconstruction, ultimately improving the grant application process for funding those projects ahead of construction needs.
- Larry, in what specific ways has AI-driven or could drive predictive maintenance to improve operational efficiency on the buildings you manage?
- Lawrence Melton: The Lawrence Berkeley National Lab study, along with a GSA-led study from 2008-2009, has established a foundational business case for smart buildings in the public sector. Key findings indicated that for every 86 cents invested per square foot, there are significant returns: 9 cents in energy savings and 43 cents in labor or operating cost savings. While achieving energy savings is relatively straightforward, the greater challenge lies in realizing labor savings through improved operational efficiency. To achieve this, the facilities management delivery model must evolve. GSA is leading efforts to change how maintenance contracts are structured, focusing on accountability and integrating AI and building automation technologies. Although the industry has not fully adopted generative AI, there’s potential for its application. The ongoing transformation emphasizes the need for a shift in how services are procured and scoped, moving away from traditional methods. This evolution is marked by varying rates of adoption across agencies, with many still in the early stages of implementation.
- Tim, considering the federal government’s focus on sustainability, how can AI help ESCOs be cost-effective?
- Tim Unruh: The government plays a crucial role in providing valuable data, such as the federal energy management program’s vast database of opportunities. This information can be analyzed using AI to improve building schedules and operations, helping to adapt to changing usage patterns. Additionally, energy data can uncover insights about facility performance. By combining measurements of individual equipment with analysis of utility bills, we can identify discrepancies and opportunities for savings that may not be apparent from equipment performance alone. This dual approach allows for proactive problem-solving. Feeding this data into an AI model can enable predictions that address issues before they escalate, enhancing overall energy efficiency and operational effectiveness.
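The dual approach Tim describes — comparing the sum of individual equipment measurements against the utility bill for the same period — can be sketched as a simple discrepancy check. All figures below are illustrative:

```python
# Sketch of an equipment-vs-utility-bill discrepancy check: if metered
# equipment accounts for much less energy than the bill, the gap points to
# unmetered loads or a fault worth investigating. Numbers are invented.

def find_discrepancy(equipment_kwh, utility_bill_kwh, tolerance=0.10):
    """Return the unexplained share of the bill, or None if within tolerance."""
    metered = sum(equipment_kwh.values())
    gap = utility_bill_kwh - metered
    share = gap / utility_bill_kwh
    return share if abs(share) > tolerance else None

month = {"chillers": 42_000, "air_handlers": 18_000, "lighting": 15_000}
share = find_discrepancy(month, utility_bill_kwh=100_000)
if share is not None:
    print(f"{share:.0%} of the bill is unexplained")
```

Feeding a time series of these gaps into a predictive model is one way AI can surface performance issues before they escalate, as the panel suggests.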
- Federal Government building data cannot currently be shared on these commercial AI platforms. ESCOs are already operating with some of that information on their side. It seems to me like ESCOs are the front guard of ensuring that the federal government would be assisted in integrating AI gradually because they already have the data security in place. Any reactions to data security and the concept around how ESCOs are adept at being able to start to manage that?
- Tim Unruh: Data security is going to be a big issue when you’re dealing with federal agencies. Any data you feed into a model could become public, so I think you have a challenge that you may have to have private models that you feed this with in order to maintain that data security.
- Keiva Rodriques: In the airport environment, collaboration among various entities is crucial for enhancing passenger experiences. Airports manage contracts with airlines, restaurants, and other services, each governed by its own rules and regulations, particularly regarding security and data management under DHS and TSA oversight. To create a more seamless customer experience, it’s essential for stakeholders to form a working group to navigate and harmonize these diverse regulations. This collaborative approach will help identify opportunities for synergy while respecting the necessary data security protocols of each airline and partner.
- In the airport environment today, before AI, how do you collaborate among the different entities to ensure data security?
- Keiva Rodriques: Coordination is essential in airport operations, especially when it comes to security and technology integration. The airport’s operations and security teams maintain a strong relationship with DHS, meeting weekly to stay aligned on regulatory changes and necessary procedural updates. As technological advancements, like biometric systems, are explored, it’s crucial to involve all relevant stakeholders, including the airport technology and innovation group. This ensures that everyone’s needs and the necessary infrastructure are addressed. Open dialogue about rules and regulations allows for responsible discussions and effective collaboration, facilitating the integration of new technologies to enhance passenger experiences while maintaining security standards.
- How do you ensure or could ensure that AI systems used in facility operations align with ethical standards and data privacy regulations? What kind of best practices can you share that are being deployed today, but could be optimized and enhanced with AI?
- Lawrence Melton: The integration of AI and smart building sensors raises important concerns about data quality, ethics, and privacy. It’s essential to conduct human quality checks on data to ensure it is accurate and free of personally identifiable information. Given the current low utilization rates of buildings—often around 20%—there’s a pressing need to identify operational efficiencies and cost savings. Transparent communication is crucial, especially when engaging stakeholders like HR and unions. Addressing these groups directly about the goals of data collection and its potential for reinvestment can foster trust and collaboration. It’s vital to have open discussions about sensitive topics, as this can lead to better decision-making and resource allocation. Ultimately, building owners and operators have a responsibility to maintain data integrity and ensure that employees feel their privacy is protected. This transparency and ethical approach can build confidence and support for new technologies that aim to improve workplace efficiency.
- Keiva Rodriques: We are currently developing an artificial intelligence policy, which is under review by CIOs and IT directors across various transportation sectors, including highways, ports, transit, and airports. Once finalized, the policy will guide ethical and responsible engagement with generative AI and machine learning. Exploring these technologies is crucial, and it’s important to do so while ensuring security and confidentiality. The policy will require staff to verify the accuracy of AI-generated information before publication, relying on internal subject matter experts for proper analysis. The emphasis is on actively engaging with industry and learning from private sector practices, while being cautious not to compromise sensitive information. The goal is to create policies that align with private sector standards and effectively enhance operations in the public sector.
- How do you all see supporting AI adoption in your organizations helping with retaining and recruiting fresh talent?
- Keiva Rodriques: Creating opportunities for skill development in AI is essential, and the organization is actively supporting staff education, including an employee pursuing an MBA focused on AI. The evolving educational landscape is aligned with the organization’s goal to build expertise in AI and its business implications. The airport is also rolling out a sustainability plan that includes the development of a microgrid. This initiative aims to enhance data collection for forecasting energy supply and demand, manage complex energy structures, and incorporate renewable power generation. The airport is converting many buses and shuttles to electric vehicles, and the microgrid will support this transition by automating connections during utility outages. Additionally, the organization is exploring the integration of AI tools with Building Information Modeling (BIM) and other design software to streamline design processes. This could lead to faster and more cost-effective designs, especially for recurring projects like runways, taxiways, and passenger boarding bridges, ultimately saving public funds.
- Lawrence Melton: Investing in comprehensive training and development for the workforce is crucial, especially in the context of automating facilities management services. The current labor market for trades is tough, with many workers lacking the necessary technology education. It’s not enough to offer basic classes; instead, a robust, ongoing training program is needed to keep pace with evolving technology. Recognizing this gap, the organization is developing an educational platform aimed at equipping union personnel and other workers with certifications in building automation and other relevant skills. The goal is to create a pipeline of qualified candidates, transforming HVAC technicians into junior developers who can effectively manage smart building systems. While there’s excitement around advanced technologies like AI, it’s essential to prioritize investing in people. Past focus on technology without adequate workforce training has led to a disconnect. As AI and automation become integral to operations, it’s imperative to educate workers on how to use these tools effectively, turning potential job threats into opportunities for enhanced performance and efficiency. This approach is seen as both necessary and overdue for the future of smart building management.
- How do you see AI evolving in the next 5-10 years within your respective fields of facilities, operations, and renovation? What trends would you be paying attention to?
- Keiva Rodriques: One emerging trend at the airport involves using drones for data collection, particularly for the pavement management program. This technology allows for more accurate inspections without disrupting air traffic, as traditional methods often required personnel to drive the runway and manually inspect conditions. Instead of viewing drones as a threat to jobs, the focus is on retooling and retraining staff to operate drones, offering opportunities for employees to become certified to fly commercially. The airport is committed to investing in its workforce through programs like the Advanced Leadership Program, which supports employees ranging from new hires to long-term staff. This investment in people is seen as essential for organizational success. By embracing new technologies, the airport aims to enhance operational efficiency and sustainability, particularly in data collection for energy usage and facility management. The overarching goal is to remain open and curious about technological advancements to continually improve operations.
- Lawrence Melton: To summarize, there’s a strong call for a shift in how services are approached within facilities management. The emphasis is on moving away from traditional operations and maintenance terminology towards “integrated facilities management.” This change would signal a transformation in service requirements, encouraging organizations to adopt more advanced, technology-driven solutions. The goal is to drive demand for a workforce equipped with the necessary skills to meet these new requirements, thereby prompting broader investments in training and development for trades. If organizations began specifying integrated management in their requests for proposals (RFPs), it could spark a significant positive change in the industry and help bridge the skills gap that’s currently a challenge.
- If we have so much data that we don’t have time to use it effectively now, how am I going to take the time to validate data and then train a model? What is the return on investment of my efforts? Do I have to hire a new person to QA all of the building data and train the AI? How long does it take to train the AI?
- Keiva Rodriques: Starting with the end goal in mind is crucial for effective data collection and management. By defining clear objectives, organizations can ensure that the data they gather is relevant and actionable. This involves establishing key performance indicators (KPIs) that align with strategic goals, which helps in tracking progress and making informed decisions. It’s also essential to communicate these requirements clearly in contracts with consultants and contractors, specifying what data is needed, when, and how it should be reported. This systematic approach prevents the common pitfall of being “data rich and information poor,” where too much irrelevant data overwhelms the ability to derive meaningful insights. Ultimately, having a focused strategy not only streamlines data collection but also enhances the organization’s capacity to leverage that data for operational improvements and strategic initiatives. This structured mindset fosters a culture of accountability and continuous improvement, aligning technology efforts with broader organizational objectives.
- Lawrence Melton: Start with the end in mind. Data should always serve a purpose, and having a clear end goal is essential for effective utilization. Reverse engineering from that end state not only clarifies what data is necessary but also streamlines the process of training personnel. When it comes to training operations and maintenance staff, the timeframe can indeed vary significantly based on individual skill levels, the complexity of the technology, and the specific roles involved. A well-structured training program, tailored to the needs of the organization and the existing competencies of the personnel, can make a big difference. For entry-level positions, a focused 6-month program might cover foundational skills, while more advanced roles—especially those integrating AI and sophisticated building systems—could require a longer commitment of 12 to 18 months or more. This approach ensures that staff not only learn the necessary technical skills but also understand how to apply them effectively within the context of their operations. Ultimately, investing in a flexible, competency-based training model can help organizations build a more capable workforce ready to meet the demands of modern facilities management.
- AI can be revolutionary, especially for innovation for green buildings. Is there any plan for any Federal entity (GBAC, GSA, DOE, NREL, etc.) to post (and keep up to date) a list of AI tools and descriptions available for those in the design and building world? There are more and more software tools available. For me as a small practitioner, it would be GREAT to access a comprehensive list that I can count on of what tools are available, what they can do, and some instructions on how to use them (learning what software to use and how to use it is a major barrier to using all the great software).
- What current AI tools assist, or could assist, GSA with better energy management?
- Could anyone estimate the cost to get organizations to start on an AI training model? How should one budget?
- How will any owner or user trust that AI is not automatically processing false narratives or misinformation as truth, that the AI has no built-in implicit bias, and that there is no procedural bias in prompt development (akin to leading survey questions that fail to address important issues of equity, justice, or human impacts)?
General Questions:
- How could AI check for and identify “discrimination” or “advance equity” per the Executive Orders?
- AI can review policy or program documents for equity and inclusion by using natural language processing to detect bias, analyze representation, and forecast impacts on different groups. It can suggest language improvements, highlight gaps in equity, and propose alternatives to ensure underrepresented groups are addressed. Additionally, AI can benchmark the document against established inclusion standards and recommend ways to align with best practices. This process helps ensure policies are inclusive both in language and practical outcomes.
Closing Remarks and Next Steps
- It is important to have human champions in AI initiatives. Today’s sessions advocated for starting with pilot projects to train people, models, and data, acknowledging that mistakes will occur along the way. These mistakes are essential for learning and iteration, which are necessary to establish the guidelines required by executive orders. The key takeaway is to begin these efforts while remaining mindful of the process.
- The need for equity and inclusion in AI was a common topic brought up multiple times in questions/discussions.
- At the enterprise level, there are many Chief Technology Officers and Chief Information Officers exploring the deployment of large language models using API connections. Companies like OpenAI, Salesforce, and SAP are offering pre-trained models with guardrails for enterprises to integrate their private data. While the technology costs can be significant, there is still much to learn about implementing AI solutions at scale. The hope is that attendees feel inspired and comfortable trying these solutions, as there is a need for champions within organizations to lead pilot projects.
- GBAC will next meet for their regularly scheduled bi-weekly meetings on October 10 from 3:00-4:00 pm EST.
Any additional questions should be sent to GBAC@gsa.gov and Michael Bloom (michael.bloom@gsa.gov). Put “AI and Federal Buildings” in the subject line.
Disclaimer - These notes represent the deliberations of a task group to an independent advisory committee, and as such, may not be consistent with current GSA or other Federal agency policy. Mention of any product, service or program herein does not constitute endorsement.
Glossary
Machine Learning: ML is a subset of AI focused on systems that learn from data, identifying patterns, and making decisions with minimal human intervention. ML models are trained on large datasets to improve over time in tasks like predictions, classifications, and optimization.
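To make the train-then-predict cycle described above concrete, here is a minimal sketch (not from the meeting) that fits a simple linear model to invented outdoor-temperature vs. energy-use readings and then predicts usage for an unseen temperature; the data and variable names are illustrative assumptions only.

```python
# Minimal illustration of the ML train/predict cycle: fit a linear model
# y = slope * x + intercept to (hypothetical) temperature/energy data,
# then use the fitted model on new data. Real building-analytics models
# would use far richer features and an ML library, not this toy fit.

def fit_linear(xs, ys):
    """Ordinary least squares for a one-variable linear model."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def predict(model, x):
    """Apply the trained model to a new input (this step is 'inference')."""
    slope, intercept = model
    return slope * x + intercept

# Hypothetical "training data": daily average temperature (F) vs. metered kWh
temps = [30, 40, 50, 60, 70]
usage = [520, 480, 440, 400, 360]

model = fit_linear(temps, usage)
print(predict(model, 55))  # predict energy use at an unseen temperature
```

Because the toy data is perfectly linear, the model recovers it exactly; the point is the pattern — learn parameters from historical data, then apply them to new inputs.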
Generative AI: GenAI is a branch of AI that creates new content, such as text, images, or music, based on learned patterns from large datasets. GenAI models, like GPT, are trained to generate new outputs that resemble the data they were trained on, from reports to realistic simulations.
Digital twin: a virtual representation of a physical object, system, or process. It uses real-time data and simulations to mirror the characteristics and behaviors of its physical counterpart, allowing for analysis, monitoring, and optimization. Digital twins are commonly used in industries like manufacturing, healthcare, and smart cities to improve efficiency, predict performance issues, and enhance decision-making. By integrating IoT sensors and data analytics, they can provide insights into operations, helping organizations to simulate changes and foresee outcomes before implementing them in the real world.
Building Information Modeling (BIM): a digital process that involves creating and managing 3D models of buildings and infrastructure throughout their lifecycle. BIM integrates various aspects of a project, including architectural, structural, and MEP (mechanical, electrical, and plumbing) designs, into a single model.
Hallucination: a generative AI tool perceives patterns or objects that do not exist, creating outputs that are nonsensical or altogether inaccurate.
Inference: the process by which a trained machine learning model draws conclusions from brand-new data; in short, inference is an AI model in action.
Retrieval-Augmented Generation (RAG): a technique for enhancing the accuracy and reliability of generative AI models with facts fetched from external sources. It augments large language model prompts with relevant data, producing more accurate responses grounded in information sources specific to the business.
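As a hedged sketch of the RAG pattern defined above: retrieve the document most relevant to a question, then prepend it to the prompt sent to the language model. The "retriever" here is naive keyword overlap and the documents are invented examples; production systems use vector embeddings and a real LLM API for the generation step, which is omitted here.

```python
# Minimal RAG sketch: keyword-overlap retrieval + prompt assembly.
# Documents and question are hypothetical; no actual LLM call is made.

DOCS = [
    "Building 12 chiller was replaced in 2022 and its setpoint is 44 F.",
    "The lease for Building 7 expires in 2026 and includes solar panels.",
    "The annual energy audit report is due every October.",
]

def retrieve(question, docs):
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question, docs):
    """Augment the prompt with retrieved context before calling an LLM."""
    context = retrieve(question, docs)
    return (f"Answer using only this context:\n{context}\n\n"
            f"Question: {question}")

prompt = build_prompt("When was the Building 12 chiller replaced?", DOCS)
print(prompt)
```

The design point is that the model answers from supplied, organization-specific context rather than from its training data alone, which is what makes responses more verifiable.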
Stable Diffusion: a latent text-to-image diffusion model capable of generating photorealistic images from any text input.
AI Outcome-Based Approach: A design approach that focuses on the outcomes for the users, community, and environment from the start of the design process.
The MacLeamy Curve: as a project progresses, the cost of design changes rises while the ability to influence sustainability outcomes declines.
AI Risk Management Framework: In collaboration with the private and public sectors, NIST has developed a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI). The NIST AI Risk Management Framework (AI RMF) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.