An alternative access method for the same information available from the CNI-ANNOUNCE listserv.
Sending on behalf of our ELI colleagues, an invitation to register for the fall online ELI focus session “Leadership for Teaching and Learning: More Choices, More Complexities, New Models.”
Colleagues, just a reminder that we have recruited a terrific set of speakers for the ELI online focus session (Sept 15-16). Among them are Jen Stringer (UC Berkeley), Vince Kellen (University of Kentucky), and EDUCAUSE’s new president, John O’Brien. The attached PDF contains the complete program.
Today, responsibility for support of teaching and learning is shared across different campus organizations. For this focus session, we will explore the issues and opportunities for leadership in teaching and learning in this new context. Each of our presenters will be contributing a set of leadership lessons learned to what we are calling a leadership toolbox, which will be a valuable resource for all attendees.
We hope you can join us! Feel free to email me should you have questions about any facet of the focus session.
Director, EDUCAUSE Learning Initiative
The Institute of Museum and Library Services (IMLS) has issued a report, prepared by OCLC Research, summarizing its meeting of thought leaders convened to discuss “Learning in Libraries.” The focus meeting was held in Kansas City in May.
One of the themes from the report urges libraries to “design participatory learning programs that demonstrate innovation and scalability.” The panel I moderated at the meeting highlighted that theme and featured lively project presentations from speakers at the University of Nevada, Reno; the Westport Library; and the Library as Incubator Project. The presentations are summarized in the report.
IMLS suggests that the report might be useful background for those preparing proposals for some of their grant programs.
You can download the report from a link on a blog post from IMLS: http://blog.imls.gov/?p=5999
The New Media Consortium (NMC) has released the Horizon Report – Library Edition 2015. It is available at
The report identifies trends accelerating technology adoption in academic and research libraries in three time “horizons” and also notes challenges impeding technology adoption and important developments in technology for academic and research libraries.
Among the findings: “Increasing Value of the User Experience” and “Prioritization of Mobile Content and Delivery” are key short-term trends driving changes in academic and research libraries over the next one to two years; “Evolving Nature of the Scholarly Record” and “Increasing Focus on Research Data Management” are mid-term trends expected to accelerate technology use in the next three to five years; and “Increasing Accessibility of Research Content” and “Rethinking Library Spaces” are long-term trends anticipated to impact libraries for the next five years or more. The report contains much more, including links to initiatives in libraries.
Disclosure – I participated on the advisory panel for this report.
I wanted to share this announcement of an interesting preliminary report from a broad-based study of access practices for born-digital collections in cultural memory organizations, which I think will be of interest to CNI-ANNOUNCE readers.
For the past year, a research team has been working on a project to map the landscape of born-digital access. The team surveyed over 200 cultural heritage institutions regarding their access policies and procedures.
The team is preparing to share initial findings at a session at the Society of American Archivists Annual Meeting (http://sched.co/2y9i), and we thought that you might be interested, too. The document outlining our research is available here: http://bit.ly/hackbdaccess-report
We welcome any feedback from your membership, and many thanks to those of you who participated in the survey. If you’d like to follow SAA and the session on Twitter, please keep an eye on the hashtags #saa15 #s110 next Thursday!
Digital Collections Archivist
Institutions seek to enhance and promote their reputations in order to attract funders and faculty and to improve their rankings. Because universities change their official names as part of branding activities, academic departments rename themselves to reflect new curricular emphases, and schools merge with or separate from parent institutions, institutional identifiers are crucial for accurately representing scholars’ affiliations both on their output and on grant applications. Institutions may not realize that they already have such an institutional identifier, ISNI, and that this identifier has already been disseminated, is used by ORCID, and is included in VIAF and Wikidata. In this presentation from CNI’s spring 2015 meeting, Karen Smith-Yoshimura of OCLC Research summarizes the current work of a task force on use cases and challenges of representing organizations in the ISNI database.
Challenges Presented by Institutional Identifiers is now available online:
and on Vimeo: https://vimeo.com/136100763
In this presentation from CNI’s spring 2015 meeting, Jon Cawthorne (West Virginia), Vivian Lewis (McMaster) and Lisa Spiro (Rice) present key results from a pilot global benchmarking study on digital scholarship expertise. The project involved visiting leading digital humanities and digital social science organizations in several countries and conducting interviews with research staff, faculty, graduate students, and administrators in order to understand the core skills required for digital scholarship and the characteristics of organizations that cultivate these skills.
Building Expertise to Support Digital Scholarship: A Global Perspective is now available online:
and on Vimeo: https://vimeo.com/134886596
On June 4, 2015, Columbia University convened a conference on Web Archiving Collaboration: New Tools and Models. Presentations and videos from the meeting are now available, linked from the conference agenda here:
I want to share a pointer to a paper published in PLoS ONE July 24, 2015 titled “Sizing the Problem of Improving Discovery and Access to NIH-Funded Data: A Preliminary Study” by Kevin Read et al.
This is an excellent example of work that is badly needed to help us better understand the scale of the challenge of managing research data to facilitate its discovery and reuse by other scholars, and to illuminate the roles that repositories of various types may play in this effort. I’ve reproduced the abstract below.
This study informs efforts to improve the discoverability of and access to biomedical datasets by providing a preliminary estimate of the number and type of datasets generated annually by research funded by the U.S. National Institutes of Health (NIH). It focuses on those datasets that are “invisible” or not deposited in a known repository.
We analyzed NIH-funded journal articles that were published in 2011, cited in PubMed and deposited in PubMed Central (PMC) to identify those that indicate data were submitted to a known repository. After excluding those articles, we analyzed a random sample of the remaining articles to estimate how many and what types of invisible datasets were used in each article.
About 12% of the articles explicitly mention deposition of datasets in recognized repositories, leaving 88% with invisible datasets. Among articles with invisible datasets, we found an average of 2.9 to 3.4 datasets, suggesting there were approximately 200,000 to 235,000 invisible datasets generated from NIH-funded research published in 2011. Approximately 87% of the invisible datasets consist of data newly collected for the research reported; 13% reflect reuse of existing data. More than 50% of the datasets were derived from live human or non-human animal subjects.
In addition to providing a rough estimate of the total number of datasets produced per year by NIH-funded researchers, this study identifies additional issues that must be addressed to improve the discoverability of and access to biomedical research data: the definition of a “dataset,” determination of which (if any) data are valuable for archiving and preservation, and better methods for estimating the number of datasets of interest. Lack of consensus amongst annotators about the number of datasets in a given article reinforces the need for a principled way of thinking about how to identify and characterize biomedical datasets.
Last week, the Obama administration issued an executive order creating a National Strategic Computing Initiative to “maximize the benefits of high-performance computing research, development and deployment”. The executive order, which is not lengthy, is well worth reading; it both establishes a series of objectives and defines roles and responsibilities among a large number of government agencies involved in the program.
The executive order is here:
And there is also a blog post from Tom Kalil and Jason Miller providing additional context here:
One point I found particularly interesting: while the “objectives” section of the order speaks, as one might expect, of exascale computing systems, it also specifically identifies as an objective “Increasing coherence between the technology base used for modeling and simulation and that used for data analytic computing.” This is a disconnect that has become increasingly evident with the rise of “big data” and “data analytics” in recent years.
In the UK, Jisc has been doing some great work on learning analytics that does not yet seem to have gained wide visibility beyond the UK; I particularly want to share their “Code of Practice for Learning Analytics,” which addresses privacy and other ethical issues involved in the deployment of learning analytics. While some of this work is of course adapted to specific UK legal requirements, the broader principles are highly relevant. See
There’s also a very helpful literature review that they developed as part of the effort, which is at:
For a broad overview of Jisc’s work in the learning analytics area, and pointers to other material, see