CNI: Coalition for Networked Information


Strategies for Acquiring Access to Generative AI Services and Systems


February 5, 2026

Taylor, Shawna, and Diane Goldenberg-Hart. Strategies for Acquiring Access to Generative AI Services and Systems. Report of CNI Executive Roundtables held October 2024 and March 2025. Coalition for Networked Information, February 2026. https://doi.org/10.56561/JQFD9900



Introduction

On October 16, 2024, and March 6, 2025, CNI held remote Executive Roundtable sessions on Strategies for Acquiring Access to Generative Artificial Intelligence (AI) Services and Systems, each led by then CNI Executive Director Clifford Lynch. CNI Executive Roundtables are exploratory and qualitative, bringing together institutional partners to discuss key digital information issues and their strategic implications. These sessions included representatives from more than 20 CNI member organizations. Participants included senior library administrators, chief information and research officers, directors of teaching and learning centers, and digital transformation officers.

This report documents these specific conversations rather than attempting to generalize about the state of the broader higher education landscape. Additionally, the AI landscape is evolving quickly, and much has changed since the roundtables took place; this report should be considered a snapshot of some of the issues and strategies organizations were exploring at that time.[1],[2]

The timing of these discussions reflected pressing institutional needs for clarity around complex generative AI procurement and strategic planning challenges within higher education. Acquiring generative AI is unlike traditional technology acquisitions, which typically fit into established organizational structures: central IT units manage computational infrastructure, libraries handle information resources and content licensing, and teaching centers coordinate instructional technology. In contrast, generative AI systems defy such neat categorization. These systems represent a unique hybrid of content, computation, and process, making it challenging to assign them to any single organizational unit or determine clear ownership within existing structures.

The roundtables aimed to clarify CNI members' current approaches to large-scale, institutional-level deployment of generative AI-based systems and services, specifically the selection, acquisition, financing, and contracting of this technology. During the discussions, participants described challenges ranging from technical implementation strategies and governance frameworks to institutional readiness and risk management approaches. The conversations revealed significant variation in institutional approaches to generative AI access. In summarizing the first roundtable discussion during a CNI webinar, Lynch noted that institutional consensus was among the weakest he had observed in an emerging technology.[3]

The lack of consensus is perhaps unsurprising given that mass experimentation with generative AI systems only began with ChatGPT’s release in November 2022, marking the starting point for user and institutional engagement. Some institutions felt compelled to act quickly as faculty, researchers, students, and the broader campus community began using commercial services under licensing terms that lacked meaningful privacy protections or administrative oversight. This created urgent pressure for institutions to provide more secure and controlled environments while still allowing for experimentation. Some institutions negotiated enterprise-level agreements to keep sensitive information within institutional boundaries, while others implemented their own large language models (LLMs). Participants also described broader institutional strategies involving platforms to access multiple LLMs and sandboxes for safe experimentation, supported by governance frameworks.

We have clustered our summary of institutional perspectives into three general areas that combine related themes. We begin by examining teaching and learning together with administrative and research applications. In the second section, we consider governance in conjunction with funding. And finally, privacy, security, and training data are addressed together in the third section.

Teaching & Learning, Administrative, and Research Applications

Participants described institutional AI initiatives as emerging across three interconnected domains: teaching and learning, administrative, and research applications. While these areas often blur in practice, considering them separately helps clarify both shared challenges and domain-specific issues.

Teaching and Learning Applications

Generative AI’s public release in late 2022 significantly affected teaching and learning, with academic integrity concerns frequently driving immediate responses. Initial efforts largely centered on preventing academic misconduct and establishing clear policies, leading to guidelines for appropriate student use of these tools. Several institutions, after responding to these immediate needs, reported a shift in focus to explore how AI tools might enhance instruction rather than concentrating solely on restrictions. Students reportedly sought faculty guidance on what the institution considered appropriate use as well as how to effectively leverage these tools for learning. Faculty responses varied considerably, with some expressing enthusiasm for the pedagogical possibilities while others emphasized concern with preserving academic rigor. Libraries emerged as active partners in this work, providing faculty with pedagogical resources and supporting students with AI tutors, discovery tools, and dedicated learning spaces.

Administrative Applications

Administrative applications of generative AI systems cover many operational functions, including student services and student admissions. One common example is chatbots trained on institution-specific information pulled from existing systems, knowledge bases, and policies to answer user questions. Many institutions have deployed chatbots, though the resources available to any given unit to create and support them may vary within the same institution. Several participants indicated concerns about equity and access to AI tools by under-resourced units.

Research Applications

For this report, “research applications” has a specific definition: developing, training, or adapting AI models as part of research itself—not simply using tools for administrative tasks like summarizing articles or formatting citations. Several well-resourced institutions reported making significant infrastructure investments to advance the use of AI in research, including rethinking their high-performance computing strategies and investing in computational facilities for model training. Due to computational constraints, one institution shifted its research support approach away from large local model training toward using smaller pre-trained models and downloading open-source models to fine-tune for specific research needs. Additionally, libraries at several institutions developed, or were considering developing, AI applications to enhance access to and processing of special collections.

Below are interventions implemented by participating institutions to mature AI practices in teaching and learning, administrative, and research applications, grouped by frequency of adoption:

All or almost all participating institutions

  • Enhanced academic integrity policies and developed specific guidelines addressing student misuse of AI tools.

Most participating institutions

  • Leveraged AI capabilities already embedded within purchased enterprise platforms such as Workday, Zoom, and Microsoft 365.
  • Developed faculty support initiatives to varying degrees, typically through teaching and learning centers, including AI literacy programs, specialized workshops on pedagogical applications, training on detection tools, and communities of practice for sharing strategies.
  • Held AI workshops for students, faculty, and staff, which drew strong interest. One institution saw attendance rates nearly double after introducing AI topics, with notably higher faculty and staff participation than typical.
  • Offered staff training, which varied significantly. One institution issued mandatory AI workplace training covering security, privacy, and ethical concerns while still encouraging experimentation. Another partnered with a vendor for staff professional development. Some institutions launched staff productivity pilots, experimenting with different AI tools and their various applications to assess staff productivity gains.

Some institutions

  • Had programs for developing AI tools for research and instruction, including semantic search and discovery systems for collections, AI tutors customized with course materials, and AI-enhanced programming instruction through Jupyter notebooks.
  • Implemented retrieval-augmented generation (RAG) architectures, which connect AI systems to institutional data repositories, leveraging internal data not included in the original training data to customize chatbots to local context.
  • Developed specialized applications for their domains, ranging from teaching students how to use AI for job searching to formatting messy application data for human review.
  • Used AI to aid in software development, which emerged as an early administrative success story. Multiple institutions reported that developers found AI coding assistants beneficial, with one noting that every developer now had access to a paid tool. Some had begun exploring multiple platforms for comparison.
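The RAG pattern mentioned above can be sketched minimally: retrieve the institutional documents most relevant to a query, then pack them into the prompt so the model answers from local context rather than its training data alone. The corpus, relevance scoring, and prompt format below are illustrative assumptions for a toy example, not any participating institution's actual implementation; a production deployment would use vector embeddings and an actual LLM call.

```python
# Minimal sketch of retrieval-augmented generation (RAG): a hypothetical
# institutional knowledge base is searched for the passages most relevant
# to a user question, and those passages are assembled into a grounded
# prompt. Word-overlap scoring stands in for real embedding-based retrieval.

def score(query: str, doc: str) -> int:
    """Count query words that appear in the document (toy relevance score)."""
    q = set(query.lower().split())
    return sum(1 for w in doc.lower().split() if w in q)

def build_rag_prompt(query: str, corpus: list[str], k: int = 2) -> str:
    """Retrieve the top-k documents and assemble a context-grounded prompt."""
    ranked = sorted(corpus, key=lambda d: score(query, d), reverse=True)
    context = "\n".join(f"- {d}" for d in ranked[:k])
    return (
        "Answer using only the institutional context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

# Hypothetical policy snippets standing in for institutional data:
corpus = [
    "Tuition refund requests must be filed within 30 days of withdrawal.",
    "The library offers AI literacy workshops each semester.",
    "Parking permits are issued by the transportation office.",
]
prompt = build_rag_prompt("When are tuition refund requests due?", corpus)
print(prompt.splitlines()[2])  # top-ranked context line
```

The design choice this illustrates is the one participants described: the model itself is unchanged, and customization to local context happens entirely through what is retrieved and placed in the prompt at query time.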

Governance and Funding

Beyond specific applications, institutions faced broader questions concerning how to organize, govern, and fund their AI initiatives. Where senior leadership provided clear directives and resource commitments, institutions reported having formalized AI committees or task forces, with representation from an array of campus partners, including administrative leadership, faculty, IT, libraries, research administration, and teaching and learning centers. At several institutions without top-down directives, staff from different departments formed cross-functional teams, advancing AI projects independently. These informal networks required coordination and communication across units to avoid pitfalls such as procurement duplication.

Procurement of AI tools generally follows traditional IT approaches, with the chief information officer and IT departments taking the lead in contract negotiations and vendor relationships, often leveraging existing cloud computing partnerships. While AI contracts align more closely with standard IT agreements than with, for instance, library license agreements, deployment typically involves multiple campus stakeholders, such as central IT, the library, research computing, the provost's office, and the digital transformation office (at institutions where one exists), which often serves as the implementation strategy coordinator.

The funding landscape presented its own challenges. Many institutions received substantial one-time funding from senior-level leadership, including commitments from trustees, regents, provosts, academic senates, and presidents. However, operational funding for sustained services remained largely uncertain. While some institutions had begun transitioning from pilot funding to operational budget requests, most were still deciding between central funding (similar to other infrastructure investments) and cost recovery models. Multiple institutions noted challenges establishing success metrics or measuring return on investment for generative AI tools, with some reporting that early hoped-for productivity gains did not materialize. The prevalence of pilot programs and phased approaches likely reflects these assessment difficulties. Pilots allow institutions to defer budget and implementation decisions while they determine how to evaluate value.

Varied vendor pricing models further complicated these decisions, with institutions assessing licensing and enterprise agreements. Per-seat models raised questions about who at the institution should receive access. Some vendors had shown a willingness to negotiate alternatives to per-seat licensing, such as usage licensing, though these contracts were typically negotiated for only one or two years. Although inconsistent vendor pricing presented many challenges, vendor solutions proved to be more affordable than in-house solutions, which require substantial investments in specialized staff and GPU infrastructure. Still, significant cost differentials exist across vendors. Tools integrated into existing enterprise licenses (such as Microsoft Copilot, Google NotebookLM, or Gemini) seemed broadly affordable, while standalone AI services requiring separate contracts (such as ChatGPT or Claude) appeared cost-prohibitive to scale for most institutions.

Several institutions explored alternative funding approaches, including philanthropic support for infrastructure, federal grants, endowment funding for staff development, consortium arrangements for vendor negotiations, and seed grants for faculty and staff. However, these efforts operate against a backdrop of increasing budgetary constraints. Individual departments, including libraries, expressed uncertainty about their ability to sustain AI tool funding independently, with some noting they would continue as long as possible while hoping other units would assume costs for broader access. Many participants expressed concern that pilot funding would disappear without sustained institutional commitment, as costs exceed available budgets.

In terms of licensing, libraries reported encountering new AI-related clauses in license agreements for existing resources when vendors began integrating AI functionality into their products. With new features, including enhanced search and discovery capabilities, vendors are introducing new contractual language governing content usage that requires careful monitoring and review.[4]

Some institutions reported developing evaluation frameworks to assess both new and existing tools that added AI capabilities. Some common themes involving governance and funding included:

  • Limited staffing availability for contract and security/privacy review, resulting in lengthy implementation delays at several institutions.
  • A preference for leveraging existing relationships with established technology providers rather than navigating procurement processes with smaller, newer companies in the rapidly changing AI landscape.
  • Substantial infrastructure investments by well-resourced institutions in large-scale GPUs, new or expanded data centers, and high-performance computing capabilities. One participant characterized the ability to perform local AI processing as the “ultimate in data privacy,” particularly for sensitive research data.
  • Use of open source tools like Open WebUI, Ollama, and LiteLLM to provide communities with opportunities to experiment with and compare various models. One organization reported incorporating an accounting layer into its system for cost-tracking.
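The governance and accounting functions described above can be sketched as a small routing layer: each request is checked against the models approved for the data's risk classification, and token usage is accumulated per unit for cost tracking. The model names, risk tiers, and rates below are illustrative assumptions only; real gateways such as LiteLLM express comparable policies through configuration rather than hand-written code.

```python
# Sketch of the governance layer an AI gateway might implement: route each
# request to a model approved for the data's risk level, and record usage
# cost per campus unit. All model names, tiers, and rates are hypothetical.

APPROVED = {  # risk level -> models cleared for that data classification
    "public": {"model-a", "model-b", "local-model"},
    "internal": {"model-b", "local-model"},
    "sensitive": {"local-model"},  # only locally hosted models
}
RATE_PER_1K_TOKENS = {"model-a": 0.01, "model-b": 0.03, "local-model": 0.0}

class Gateway:
    def __init__(self):
        self.usage = {}  # unit -> accumulated cost in dollars

    def route(self, unit: str, model: str, risk: str, tokens: int) -> str:
        """Reject disallowed model/risk pairs; otherwise record the cost."""
        if model not in APPROVED.get(risk, set()):
            raise PermissionError(f"{model} not approved for {risk} data")
        cost = tokens / 1000 * RATE_PER_1K_TOKENS[model]
        self.usage[unit] = self.usage.get(unit, 0.0) + cost
        return f"routed {tokens} tokens to {model}"

gw = Gateway()
print(gw.route("library", "model-b", "internal", 2000))  # allowed
print(round(gw.usage["library"], 2))  # accumulated cost for the unit
```

Centralizing these checks in one chokepoint is what lets IT administrators enforce access policies for different risk levels and monitor usage patterns, as participants described.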

Privacy, Security, and Training Data

A key consideration driving institutional AI strategies is balancing the benefits of third-party AI services against the risks of sharing sensitive data with external vendors. Participants noted that even vendor privacy assurances require trusting institutional information to systems outside direct institutional control. To address this concern, a few organizations implemented gateway platforms: unified interfaces (which may be considered middleware) that provide a single point of access to multiple LLMs while keeping institutional data within protected environments.[5] These platforms allow users to experiment with different models, while IT administrators maintain centralized authentication, retain governance controls over data usage, implement access policies for different risk levels, monitor usage patterns, and ensure that institutional content is not accessed by or used to train external AI systems. DeepSeek, for instance, was largely prohibited by IT departments, yet one institution chose to permit its use within their controlled environment.

Training data considerations also arose in the discussions, with participants noting that they were largely adopting commercial foundation models, trained on a variety of datasets (scraped websites, newspapers, books, etc.), but supplementing them with local institutional content such as policies, procedures, and knowledge bases. Early efforts to train models using discipline-specific scholarly content were underway.

Conclusion

These Executive Roundtable discussions sought to understand how some CNI member organizations were approaching the selection, acquisition, and funding of generative AI systems and services. Participants from over 20 institutions revealed a complex landscape characterized by significant variation in approaches. While institutions shared common challenges around governance, funding sustainability, and data control, their responses varied considerably, often based on resource levels, organizational structures, and strategic priorities. Most institutions recognized the importance of aligning AI efforts across departments, though approaches to coordination differed.

Financial sustainability presents one of the most pressing concerns, with many institutions relying on one-time funding allocations while lacking clear pathways for operational budget integration when pilot funds expire. Additionally, AI systems require investments in content, computation, and process, and they resist traditional organizational structures, creating ongoing governance challenges. Balancing the convenience of third-party AI services against institutional data security and privacy requirements is another central area of concern.

The discussions highlighted how institutions are adapting to generative AI adoption through increased collaboration and knowledge sharing, with many participants drawing parallels to the COVID-19 pandemic responses in terms of rapid institutional change and cross-campus coordination. Several participants expressed concerns that their institutions were “behind” in AI adoption compared to peers, perhaps reflecting the broader challenge of how institutions assess readiness and benchmarking strategies when resource levels, infrastructure, and priorities differ significantly.


Endnotes

[1] Themes explored in this report were largely shaped by Clifford Lynch’s initial analysis between the first and second roundtables. Initial observations and readouts after the first roundtable include Lynch’s presentation at the EDUCAUSE Annual Conference, “Research University Strategies for Implementing Generative AI Systems,” October 24, 2024, https://events.educause.edu/annual-conference/2024/agenda/research-university-strategies-for-implementing-generative-ai-systems.

[2] These themes were further elaborated by Lynch in a public CNI webinar, “Research University Strategies for Implementing Generative Artificial Intelligence Systems,” November 26, 2024, https://www.youtube.com/watch?v=v-E3Mn_iHLU.

[3] Lynch, “Research University Strategies for Implementing Generative Artificial Intelligence Systems.”

[4] An example of this trend is described in this Oct. 24, 2025 blog post by Colorado State University Dean of Libraries Karen Estlund: https://source.colostate.edu/guest-column-when-publishers-fear-of-ai-prohibits-basic-uses/

[5] Examples of AI gateway access platforms mentioned by participants include Clarity AI (https://clarityai.ai/generative-ai-in-operations-tools-projects) and Amplify GenAI (https://www.amplifygenai.org/). More recently, in July 2025, Internet2 announced the launch of NET+ Portkey, a community-vetted AI gateway service for research and education communities (https://internet2.edu/internet2-launches-net-portkey-ai-gateway-service-for-research-and-education-community/).

CNI Executive Roundtables bring together a group of campus partners to discuss key digital information issues and their strategic implications. The roundtables build on the theme of collaboration that is at the foundation of the Coalition; they serve as a forum for frank, unattributed intra- and inter-institutional dialogue on digital information issues and their organizational and strategic implications. In addition, CNI uses roundtable discussions to inform our ongoing program planning process.

The Coalition for Networked Information (CNI) is a joint project of the Association of Research Libraries (ARL) and EDUCAUSE that drives the future of scholarship and education through the creative use of information technology. With a dynamic membership of over 200 organizations spanning higher education, publishing, information technology, scholarly societies, professional organizations, foundations, and leading library institutions and organizations, CNI is at the forefront of shaping how knowledge is discovered, shared, and advanced. Learn more at www.cni.org.

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.


Addendum 1

CNI Executive Roundtable

Call for Expressions of Interest

October 16, 2024, 1:00–3:30 pm ET

(additional session on October 17 may be added)

Strategies for Acquiring Access to Generative AI Services and Systems

Virtual Meeting (Zoom)

Deadline: September 27

The current crop of generative AI-based systems (LLMs, chatbots, and related systems) represents an unprecedented melding of content and computational services. The goal of this Executive Roundtable is to try to capture the current approaches to selecting, acquiring, financing, and contracting for such services among the CNI member institutions that are pursuing institutional-level or other large-scale procurement of such services; we are also interested in models that emphasize partnerships with other organizations (including commercial companies) and/or mixtures of procurement and local customization and implementation. In particular, we are looking for insights into the following questions:

  • Who is taking the lead in procurements: Chief Information Officer (CIO), Library leadership, or other parties? How are all of the other interested parties (e.g., Chief Information Security Officer (CISO), Chief Privacy Officer (CPO), faculty leadership, Chief Research Officer (CRO), general counsel) being coordinated?
  • Who are the target user communities for these services?
  • How are these services being funded? Who is paying for them? What budget lines do they involve?
  • How are security and privacy concerns analyzed, documented, and communicated to the campus community? What are the security and privacy constraints and objectives guiding the campus strategies?
  • What structures are in place to design and implement support services for the campus community? How do these differ among research, teaching, and administrative sectors?
  • How are institutions thinking about the selection of training data and the training of these models? Are there plans for institution and/or disciplinary “fine-tuning” training of models? Are vendors being asked to do specific pre-training of models? Are there interactions with institutional content licensing arrangements here?

Any CNI institutional representative may apply to participate in this Roundtable, and either one individual or a team of up to three individuals who have different roles (e.g. a library director, a CIO, a head of research computing, or a chief research officer) can represent the institution; we recognize that for this particular topic, some more novel roles may be appropriate. We particularly welcome the participation of such teams. If multiple individuals are applying, please coordinate and submit only one application per institution with all names and email addresses. If you would like to have more than three people participate please contact us. In order to have an in-depth discussion, participation in the Roundtable will be limited to approximately 15 institutions; if there is sufficient interest, we’ll offer additional sessions.

We are also trying something new in conjunction with this Executive Roundtable. Interested institutions that cannot participate in one of the scheduled discussions are invited to complete the expression of interest form and indicate their willingness to submit institutional materials at a later date if requested; we may select some of these submissions for follow-up interviews.

Cliff Lynch, CNI executive director, will moderate this session and provide some framing remarks, and then participants will have an opportunity to discuss issues with peers from other institutions. The Roundtables build on the theme of collaboration that is at CNI’s foundation. We want to promote institutional dialogue and inter- and intra-institutional information exchange on digital information issues while informing CNI’s planning process. We will disseminate a summary of the issues that emerge from the Roundtable, but in order to encourage frank discussion, there will be no individual or institutional attribution of statements without prior explicit permission from the relevant party. Reports from previous Executive Roundtables are available: cni.org/tag/executive-roundtable-report.

To express interest in participating, please complete the form at:

https://CNI.formstack.com/forms/strategies_for_acquiring_access_to_generative_ai_services_and_systems by the end of the day on Sept. 27 (if more than one person per institution wishes to participate, please coordinate and complete only one form). We will accept a maximum of about 15 institutions for each Roundtable session considering criteria such as the work the institution is doing, the composition of the institutional team, and the balance of institutions (type, geographic area, etc.) to determine who will attend. We will contact you by Oct. 2nd on the status of your expression of interest. We apologize in advance that we may have to turn away some individuals who express interest. If you have any questions about the Roundtable, please contact Diane Goldenberg-Hart at diane@cni.org.


Addendum II

We invite you to express interest in participating in a follow-on to the fall 2024 Executive Roundtable Strategies for Acquiring Access to Generative AI Services & Systems. This follow-on convening is intended to solicit input from additional institutions.

In fall 2024, CNI hosted an initial convening of a small group of CNI members on the topic Strategies for Acquiring Access to Generative AI Services and Systems. CNI Executive Director Clifford Lynch provided an early readout the week after the roundtable at the EDUCAUSE fall meeting and subsequently hosted a webinar for the CNI community providing an overview and synthesis of what we heard from participants at that time; the video of his report is here: https://youtu.be/v-E3Mn_iHLU.

We expect to produce a consolidated summary report covering both convenings.

We now invite additional institutional members that did NOT participate in the fall 2024 roundtable on the same topic to express interest in participating in a follow-on conversation on this issue.

Topic: Follow-on to Strategies for Acquiring Access to Generative AI Services & Systems (full description below)

Date: March 5, 2025*

Time: 2:00-4:30pm ET

Location: Online, via Zoom

*If there is sufficient demand, we will add a session on March 6 (2:00-4:30pm ET) – please indicate your availability on the form.

EXPRESSIONS OF INTEREST ARE NOW BEING ACCEPTED THROUGH FEB. 21, 2025:

https://cni.formstack.com/forms/follow_on_strategies_for_acquiring_access_to_generative_ai_services_and_systems

Please coordinate and complete only one form per institution.

Any CNI institutional representative (whose organization did not participate in the fall 2024 Executive Roundtable on the same topic) may apply to participate in this Roundtable, and either one individual or a team of up to three individuals who have different roles (e.g. a library director, a CIO, a head of research computing, or a chief research officer) can represent the institution; we recognize that for this particular topic, some more novel roles such as faculty leadership participating in campus-wide committees may be appropriate. We particularly welcome the participation of such teams.

See below for more details regarding participation.

FULL DESCRIPTION

CNI Executive Roundtable

Call for Expressions of Interest

FOLLOW-ON to the Fall 2024 Executive Roundtable:

Strategies for Acquiring Access to Generative AI Services & Systems

March 5, 2025, 2:00–4:30 pm ET

(additional session on March 6 may be added)

Virtual Meeting

Deadline: February 21

THIS IS A FOLLOW-ON TO THE CNI FALL 2024 EXECUTIVE ROUNDTABLE ON THE SAME TOPIC; participation is solicited from CNI institutional members that did NOT participate in the fall 2024 Executive Roundtable.

The current crop of generative AI-based systems (LLMs, chatbots, and related systems) represents an unprecedented melding of content and computational services. The goal of this Executive Roundtable series is to try to capture the current approaches to selecting, acquiring, financing, and contracting for such services among the CNI member institutions that are pursuing institutional-level or other large-scale procurement of such services; we are also interested in models that emphasize partnerships with other organizations (including commercial companies) and/or mixtures of procurement and local customization and implementation.

In particular, we are looking for insights into the following questions:

  • Who is taking the lead in procurements: Chief Information Officer (CIO), Library leadership, or other parties? How are all of the other interested parties (e.g., Chief Information Security Officer (CISO), Chief Privacy Officer (CPO), faculty leadership, Chief Research Officer (CRO), general counsel) being coordinated?
  • Who are the target user communities for these services?
  • How are these services being funded? Who is paying for them? What budget lines do they involve?
  • How are security and privacy concerns analyzed, documented, and communicated to the campus community? What are the security and privacy constraints and objectives guiding the campus strategies?
  • What structures are in place to design and implement support services for the campus community? How do these differ among research, teaching, and administrative sectors?
  • How are institutions thinking about the selection of training data and the training of these models? Are there plans for institution and/or disciplinary “fine-tuning” training of models? Are vendors being asked to do specific pre-training of models? Are there interactions with institutional content licensing arrangements here?

Participation

Any CNI institutional representative (whose organization did not participate in the fall 2024 Executive Roundtable on the same topic) may apply to participate in this Roundtable. Either one individual or a team of up to three individuals who have different roles (e.g. a library director, a CIO, a head of research computing, or a chief research officer) can represent the institution; we recognize that for this particular topic, some more novel roles may be appropriate. We particularly welcome the participation of such teams. If you would like to have more than three people participate please contact us. In order to have an in-depth discussion, participation in the Roundtable will be limited to approximately 15 institutions; if there is sufficient interest, we’ll offer an additional Roundtable.

Cliff Lynch, CNI executive director, will moderate this session and provide some framing remarks, and then participants will have an opportunity to discuss issues with peers from other institutions. The Roundtables build on the theme of collaboration that is at CNI’s foundation. We want to promote institutional dialogue and inter- and intra-institutional information exchange on digital information issues while informing CNI’s planning process. We will disseminate a summary of the issues that emerge from the Roundtable, but in order to encourage frank discussion, there will be no individual or institutional attribution of statements without prior explicit permission from the relevant party. Lynch hosted a webinar providing an overview and synthesis of what we heard from participants in the fall 2024 convening on the same topic; the video of his report is here: https://youtu.be/v-E3Mn_iHLU.

To express interest in participating, please complete the form at

https://cni.formstack.com/forms/follow_on_strategies_for_acquiring_access_to_generative_ai_services_and_systems by the end of the day on Feb. 21 (if more than one person per institution wishes to participate, please coordinate and complete only one form). We will accept a maximum of about 15 institutions for each Roundtable session considering criteria such as the work the institution is doing, the composition of the institutional team, and the balance of institutions (type, geographic area, etc.) to determine who will attend. We will contact you by Feb. 27 on the status of your expression of interest. We apologize in advance that we may have to turn away some individuals who express interest. If you have any questions about the Roundtable, please contact Diane Goldenberg-Hart at diane@cni.org.


