innovation in metadata design, implementation & best practices

Metadata Training Resources

User Guidance

As part of its mission, the Dublin Core Metadata Initiative is committed to education and training in the design of languages of description and in best practices for the daily use of those languages. To this end, DCMI provides ongoing training through its webinar series and through tutorials at both regional meetings and its International Conference on Dublin Core and Metadata Applications. Additional training resources are available through DCMI Community submissions.

You can learn more about metadata and DCMI by exploring the pages listed in the menu bar above: the Home page, Metadata Basics, Specifications, Community and Events, and About Us.

2017 Webinars

DCMI/ASIS&T Joint Webinar:

How to Design & Build Semantic Applications with Linked Data

Webinar Date: Wednesday, 14 June 2017, 10:00am-11:15am EDT (UTC 14:00)

Abstract: This webinar will demonstrate how to design and build rich end-user search and discovery applications using Linked Data. The Linked Open Data cloud is a rapidly growing collection of publicly accessible resources, which can be adopted and reused to enrich both internal enterprise projects and public-facing information systems.

The webinar will use the Linked Canvas application as its primary use-case. Linked Canvas is an application designed by Synaptica for the cultural heritage community. It enables high-resolution images of artworks and artifacts to be catalogued and subject indexed using Linked Data. The talk will demonstrate how property fields and relational predicates can be adopted from open data ontologies and metadata schemes, such as DCMI, SKOS, IIIF and the Web Annotation Model. Selections of properties and predicates can then be recombined to create Knowledge Organization Systems (KOS) customized for business applications. The demonstration will also illustrate how very-large-scale subject taxonomies and name authority files, such as the Library of Congress Name Authority File, DBpedia, and the Getty Linked Open Data Vocabularies collection, can be used for content enrichment and indexing.

There will be a brief discussion of the general principles of graph databases, RDF triple stores, and the SPARQL query language. This technical segment will discuss the pros and cons of accessing remote server endpoints versus cached copies of external Linked Data resources, as well as the challenge of providing high-performance full text search against graph databases.
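The core ideas of that segment can be shown in miniature. Below is a toy sketch (plain Python, not a real triple store; the `ex:`/`dc:` names are shorthand, not resolvable URIs) of how a triple store holds statements and how a SPARQL-style triple pattern selects them:

```python
# A toy RDF triple store: statements are (subject, predicate, object) tuples.
# None in a pattern acts like a SPARQL variable and matches anything.

triples = {
    ("ex:monaLisa", "dc:creator", "ex:leonardo"),
    ("ex:monaLisa", "dc:title", "Mona Lisa"),
    ("ex:lastSupper", "dc:creator", "ex:leonardo"),
}

def match(store, s=None, p=None, o=None):
    """Return all triples matching the pattern, SPARQL-style."""
    return {
        t for t in store
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    }

# "Which resources did Leonardo create?"
# ~ SELECT ?s WHERE { ?s dc:creator ex:leonardo }
works = {t[0] for t in match(triples, p="dc:creator", o="ex:leonardo")}
print(sorted(works))  # ['ex:lastSupper', 'ex:monaLisa']
```

A real RDF store indexes triples for scale and answers full SPARQL, but pattern matching of this kind is the core operation underneath.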

The webinar will conclude by providing a demonstration of Linked Canvas to illustrate various end-user experiences that can be created with Linked Data technology: faceted search across data collections; pinch and zoom navigation inside images; the exploration of concepts and ideas associated with specific points of interest; the discovery of conceptually related images; and the creation of guided tours with embedded audio-visual commentary.


Portrait: Dave Clarke

Dave Clarke is co-founder and CEO of the Synaptica® group of companies, providers of enterprise software solutions for knowledge organization and discovery. He served on the authoring committee responsible for the 2005 version of the US national standard for controlled vocabularies, ANSI/NISO Z39.19. Dave leads research and development at Synaptica, where he is developing an extensive range of software solutions for ontology management, image management, Linked Data management, and text analytics. He is actively involved in educational outreach programs, including LD4PE, the Linked Data for Professional Education initiative of DCMI. Synaptica software solutions have attracted numerous international awards, including Knowledge Management World magazine's 100 Companies that Matter in KM and Trend-Setting Product of the Year (multiple awards between 2011 and 2017). In 2016 Clarke was awarded the Knowledge Management Leadership Award by the Global Knowledge Management Congress. Dave is a Fellow of the Royal Society of Arts, London, and an Associate of St. George's House, Windsor Castle. He is currently researching the impact of personalized search and social media on social polarization and post-truth politics.

Categories: Linked Data | search and discovery applications | SKOS |
DCMI Terms | large-scale subject taxonomies | SPARQL
Webinar Type: Overview & Technical Briefing

Dave Clarke Webinar Cover

Access: Presentation file

DCMI/ASIS&T Joint Webinar:

Me4MAP: A method for the development of metadata application profiles

Webinar in English
Wednesday, 24 May 2017, 10:00 AM – 11:15 AM EDT (UTC 14:00)

Me4MAP: Um método para o desenvolvimento de perfis de aplicação de metadados

Webinar em Português
Quarta-feira, 31 de Maio de 2017, 10:00 AM – 11:15 AM EDT (UTC 14:00)

Abstract / Resumo

A metadata application profile (MAP) is a construct that provides a semantic model for enhancing interoperability when publishing to the Web of Data. In a MAP, each property is associated with an RDF vocabulary term and given a domain, a range, and a cardinality. According to the DCMI document "Interoperability Levels for Dublin Core Metadata", a MAP is a construct that enhances semantic interoperability. When a community of practice agrees to follow a MAP's rules for publishing data as Linked Open Data, data published to the LOD cloud can therefore be processed automatically by software agents.

A MAP is therefore a construct of great importance, and a method for its development is essential to give MAP developers a common ground on which to work. In the absence of such a method, MAP development becomes a non-systematic set of activities that may result in lower-quality MAPs.

This webinar will present Me4MAP, a method for the development of metadata application profiles. Me4MAP was developed in the context of a PhD project and is still being tested and refined; the method takes a software-engineering perspective. With Me4MAP we do not propose a universal solution; rather, our intention is to establish a starting point for the study and design of methods for the development of MAPs.
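The constraint structure the abstract describes (a reused RDF term plus domain, range and cardinality per property) can be sketched as data. A hypothetical illustration, not Me4MAP itself; all names below are invented:

```python
from dataclasses import dataclass

# Hypothetical sketch of what a MAP records for each property:
# the RDF term it reuses, plus domain, range, and cardinality constraints.

@dataclass
class MapProperty:
    term: str        # reused RDF vocabulary term, e.g. dcterms:title
    domain: str
    range_: str
    min_card: int
    max_card: int    # use a large number for "unbounded"

    def check(self, values):
        """Does a record's list of values satisfy the cardinality rule?"""
        return self.min_card <= len(values) <= self.max_card

title = MapProperty("dcterms:title", "bibo:Book", "rdfs:Literal", 1, 1)
print(title.check(["Moby-Dick"]))   # True: exactly one title
print(title.check([]))              # False: title is mandatory
```

A community that shares such definitions can validate each other's records mechanically, which is the interoperability gain the abstract points to.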

Um perfil de aplicação de metadados (MAP) é um constructo que fornece um modelo semântico para publicação de dados na Web de Dados. Este modelo semântico não é mais do que um modelo de dados com a definição de propriedades e restrições às propriedades. Cada propriedade é apresentada com um termo de um vocabulário RDF associado, com a definição do domínio e contra-domínio e ainda a sua cardinalidade. De acordo com o documento da DCMI “Os níveis de interoperabilidade para os metadados Dublin Core” um MAP é um constructo que potencia a interoperabilidade semântica dos dados. O que isto realmente significa é que quando uma comunidade de prática decide pôr-se de acordo em seguir um MAP, isto é, em seguir um conjunto de regras para publicar os seus dados como Linked Open Data, isso permite que os dados publicados na LOD cloud sejam processados automaticamente por agentes de software.

Um MAP é portanto um constructo de muita importância, e por isso é essencial a existência de um método para o seu desenvolvimento. É muito importante fornecer aos modeladores de MAPs uma base comum de entendimento para que o desenvolvimento de um MAP deixe de ser um conjunto não sistemático de actividades e passe a ser algo mais organizado de forma a resultar em MAPs de melhor qualidade.

Este Webinar apresenta o Me4MAP, um método para o desenvolvimento de perfis de aplicação de metadados. O Me4MAP é uma proposta que foi desenvolvida no âmbito de um projecto de doutoramento e que está ainda a ser testada e aperfeiçoada. É uma proposta que parte de uma perspectiva de engenharia de software. E, acima de tudo, é um um ponto de partida para o estudo e o desenho de métodos para o desenvolvimento de MAPs.

Presenter Background / Background do Apresentador
Mariana Curado Malta is an Associate Professor at the Polytechnic of Porto, Portugal, and a researcher at CEOS.PP, Portugal. She is currently on leave at the Laboratorio de Innovación en Humanidades Digitales of the Universidad Nacional de Educación a Distancia in Madrid, Spain. Her research work is framed by POSTDATA, a project financed by a European Research Council Starting Grant that aims to publish poetry data (and related data) as Linked Open Data; in POSTDATA she is responsible for the semantic modelling. Her research interests concern methods for the development of metadata application profiles and the quality of MAPs in particular and of metadata in general. She is the co-author of Me4MAP, a method for the development of metadata application profiles. Mariana Curado Malta has a PhD in Technologies and Information Systems from the University of Minho, Portugal. She is the author of several research papers and book chapters and recently co-edited the book "Developing Metadata Application Profiles", published by IGI Global.

Mariana Curado Malta é professora adjunta no Politécnico do Porto, Portugal, e investigadora do CEOS.PP. Neste momento encontra-se equiparada a bolseira no Laboratório de Inovação em Humanidades Digitais da Universidade Nacional de Educação à Distância em Madrid, Espanha. O seu trabalho de investigação enquadra-se no projecto POSTDATA, financiado por uma bolsa ERC Starting Grant, que tem como objectivo a publicação de dados de poesia (em todas as suas vertentes) em Linked Open Data; no POSTDATA é a responsável pelo desenvolvimento semântico. Os seus interesses de investigação estão relacionados com métodos de desenvolvimento de perfis de aplicação e com a qualidade dos MAPs em particular e dos metadados em geral, sendo co-autora do Me4MAP, um método para o desenvolvimento de perfis de aplicação de metadados. Mariana Curado Malta tem um doutoramento em Tecnologias e Sistemas de Informação pela Universidade do Minho e é autora de vários artigos científicos e capítulos de livros, tendo co-editado recentemente o livro "Developing Metadata Application Profiles", publicado pela IGI Global.

Mariana Curado Malta Webinar Cover

Access: Presentation file



DCMI/ASIS&T Joint Webinar:

Nailing Jello to a Wall: Metrics, Frameworks, & Existing Work for Metadata Assessment

Webinar Date: Thursday, 27 April 2017, 10:00am-11:15am EDT (UTC 14:00)

Abstract: With the increasing number of repositories, standards and resources we manage for digital libraries, there is a growing need to assess, validate and analyze our metadata - beyond our traditional approaches such as writing XSD or generating CSVs for manual review. Being able to further analyze and determine measures of metadata quality helps us better manage our data and data-driven development, particularly with the shift to Linked Open Data leading many institutions to large-scale migrations. Yet, the semantically-rich metadata desired by many Cultural Heritage Institutions, and the granular expectations of some of our data models, makes performing assessment, much less going on to determine quality or performing validation, that much trickier. How do we handle analysis of the rich understandings we have built into our Cultural Heritage Institutions' metadata and enable ourselves to perform this analysis with the systems and resources we have?

This webinar sets up this question and proposes some guidelines, best practices, tools and workflows around the evaluation of metadata used by and for digital libraries and Cultural Heritage Institution repositories. What metrics have other researchers or practitioners applied to measure their definition of quality? How do these metrics or definitions for quality compare across examples – from the large and aggregation-focused, like Europeana, to the relatively small and project-focused, like Cornell University Library's own SharedShelf instance? Do any metadata assessment frameworks exist, and how do they compare to the proposed approaches in core literature in this area, such as Thomas Bruce and Diane Hillmann's 2004 article, "The Continuum of Metadata Quality"? The Digital Library Federation Assessment Interest Group (or DLF AIG) has a Metadata Working Group that has been attempting to build a framework that can be used broadly for digital repository metadata assessment - the state of this work, and the issues it has raised, will be discussed in this webinar as well. Finally, how does one begin to approach this metadata assessment – what tools, applications, or efforts for performing assessment exist for common digital repository applications or data publication mechanisms?

This webinar hopes to provide some solutions to these questions within existing literature, work, and examples of metadata assessment happening 'on the ground'. The goal is for webinar participants to walk away prepared to handle their own metadata assessment needs by using the existing work outlined and being better aware of the open questions in this domain.
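As a concrete example of the kind of metric discussed here, completeness (one of the quality dimensions in Bruce and Hillmann's continuum) can be computed as the share of records that fill each expected field. A minimal sketch with invented records:

```python
# Toy illustration of one assessment metric: "completeness" as the
# share of records that fill each expected field. Records are invented.

records = [
    {"title": "Map of Ithaca", "creator": "Unknown", "date": "1866"},
    {"title": "Campus photo", "creator": "", "date": "1905"},
    {"title": "Letter", "creator": "E. Cornell"},  # no date at all
]

def completeness(records, field):
    """Fraction of records with a non-empty value for the field."""
    filled = sum(1 for r in records if r.get(field, "").strip())
    return filled / len(records)

for field in ("title", "creator", "date"):
    print(f"{field}: {completeness(records, field):.0%}")
# title: 100%, creator: 67%, date: 67%
```

Real assessments add further dimensions (accuracy, conformance to expectations, provenance), but most reduce to counting something over a corpus in this same way.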


Portrait: Christina Harlow

Christina Harlow works on metadata operations for the Cornell University Library. This work involves building out data infrastructure, ETL (extract transform load) functions, and Linked Open Data usage in service of distributed metadata management for Cornell's library repositories and systems. 

Categories: metadata assessment | digital libraries | assessment metrics
Webinar Type: Overview

Christina Harlow Webinar Cover

Access: Presentation file

DCMI/ASIS&T Joint Webinar:

Webinar: Boas Práticas para Dados na Web: Desafios e Benefícios

Webinar em Português
Thursday, 30 March 2017, 10:00 AM – 11:15 AM EDT (UTC 14:00)

Webinar: Data on the Web Best Practices: Challenges and Benefits

Webinar in English
Thursday, 6 April 2017, 10:00 AM – 11:15 AM EDT (UTC 14:00)

Existe um interesse crescente na publicação e consumo de dados na Web. Organizações governamentais e não-governamentais já disponibilizam uma variedade de dados na Web, alguns abertos, outros com restrições de acesso, abrangendo diversos domínios como educação, economia, segurança, patrimônio cultural, eCommerce e dados científicos. Desenvolvedores, jornalistas e outras pessoas manipulam esses dados para criar visualizações e realizar análises de dados. A experiência neste tema revela que é necessário abordar várias questões importantes a fim de satisfazer os requisitos tanto dos publicadores como dos consumidores de dados.

Neste webinar discutiremos os principais desafios enfrentados pelos publicadores e consumidores de dados ao compartilharem dados na Web. Também introduziremos o conjunto de Boas Práticas, propostas pelo W3C, para enfrentar esses desafios. Finalmente, discutiremos os benefícios de envolver os publicadores de dados na utilização das Boas Práticas, bem como melhorar a forma que os conjuntos de dados são disponibilizados na Web.

There is a growing interest in the publication and consumption of data on the Web. Government and non-governmental organizations already provide a variety of data on the Web, some open, others with access restrictions, covering a variety of domains such as education, economics, security, cultural heritage, eCommerce and scientific data. Developers, journalists, and others manipulate this data to create visualizations and perform data analysis. Experience in this area reveals that a number of important issues need to be addressed in order to meet the requirements of both data publishers and data consumers.

In this webinar we will discuss the key challenges faced by publishers and data consumers when sharing data on the Web. We will also introduce the set of Best Practices proposed by the W3C to address these challenges. Finally, we will discuss the benefits of engaging data publishers in the use of the Best Practices, as well as improving the way datasets are made available on the Web.

Background do Apresentador / Presenter Background
Bernadette Farias Lóscio possui Doutorado em Ciência da Computação pela Universidade Federal de Pernambuco, Brasil. Atuou como Professora Adjunta na Universidade Federal do Ceará e fez pós-doutorado na Universidade de Manchester. Desde 2010, é Professora Adjunta da Universidade Federal de Pernambuco, Brasil. Nos últimos anos, prestou consultoria para o W3C Brasil e para a Prefeitura da Cidade de Recife em projetos na área de Dados Abertos. É uma das editoras do Data on the Web Best Practices, uma recomendação do World Wide Web Consortium (W3C) que oferece boas práticas relacionadas à publicação e ao uso dos dados na Web. Tem como principais áreas de interesse: Dados na Web, Dados Abertos, Web Semântica, Integração de Dados, Web das Coisas e Big Data.

Bernadette Farias Lóscio received her Ph.D. in Computer Science from the Federal University of Pernambuco, Brazil. She served as an Associate Professor at the Federal University of Ceará, Brazil, and as a postdoctoral fellow at the University of Manchester. Since 2010, she has been an Associate Professor at the Federal University of Pernambuco, Brazil. She has worked as a consultant to W3C Brazil and to the Recife municipal government on open data projects. She is one of the editors of the Data on the Web Best Practices, a World Wide Web Consortium (W3C) recommendation that provides best practices for the publication and usage of data on the Web. Her main research interests include data on the Web, open data, the Semantic Web, data integration, the Web of Things, and Big Data.
Caroline Burle dos Santos Guimarães é responsável pelas Relações Institucionais do Centro de Estudos sobre Tecnologias Web e do W3C Brasil. É especialista em Negociação pela Fundação Getúlio Vargas e Mestre em Relações Internacionais pelo San Tiago Dantas. É integrante do Núcleo de Estudos e Análises Internacionais e Fellow do Programa da OEA de Governo Aberto nas Américas. É uma das editoras do documento do W3C Data on the Web Best Practices. Pesquisa sobre governo aberto, relações internacionais e política externa, com experiência na atuação de governos subnacionais, na área de Web, dados abertos e governança da Internet.

Caroline Burle dos Santos Guimarães is responsible for Institutional Relations at the W3C Brazil Office and the Web Technologies Study Center of the Brazilian Network Information Center and the Brazilian Internet Steering Committee. She is a Fellow of the OAS Fellowship on Open Government in the Americas. She holds a Master's degree in International Relations from the San Tiago Dantas program, a specialization in Negotiation from Fundação Getúlio Vargas, and a degree in International Relations from Fundação Armando Álvares Penteado. Caroline leads many projects focused on open data and open government. She is one of the editors of the W3C document Data on the Web Best Practices. She also researches the Web, open data, open government, Internet governance, and foreign policy.
Newton Calegari possui bacharelado em Ciência da Computação e mestrado em Tecnologias da Inteligência e Design Digital (TIDD) pela Pontifícia Universidade Católica de São Paulo (PUC-SP). É líder de projetos no Centro de Estudos sobre Tecnologias Web e no escritório do W3C Brasil. É um dos editores da recomendação W3C Data on the Web Best Practices. Pesquisa e atua nas áreas de Web Semântica, Open Web Platform, padronização e tecnologias emergentes para a Web.

Newton Calegari has a bachelor's degree in Computer Science and a Master of Science degree in Technologies of Intelligence and Digital Design (TIDD), both from PUC-SP. He is a researcher at the Web Technologies Study Center and at the W3C Brazil Office. He is one of the editors of the W3C Data on the Web Best Practices recommendation. He researches the Semantic Web, the Open Web Platform, standardization, and emerging web technologies.

Portuguese: Access: [Presentation slides](/resources/training/ASIST-Webinar-20170330/DWBP_webinar_PT.pdf)

English: Access: [Presentation slides](/resources/training/ASIST-Webinar-20170330/DWBP_webinar_EN.pdf)

DCMI/ASIS&T Joint Webinar:

From MARC silos to Linked Data silos? Data models for bibliographic Linked Data

Webinar Date: Tuesday, 28 February 2017, 10:00am-11:15am EST (UTC 15:00)

Abstract: Many libraries are experimenting with publishing their metadata as Linked Data to open up bibliographic silos, usually based on MARC records, to the Web. The libraries who have published Linked Data have all used different data models for structuring their bibliographic data. Some are using a FRBR-based model where Works, Expressions and Manifestations are represented separately. Others have chosen basic Dublin Core, dumbing down their data into a lowest common denominator format. The proliferation of data models limits the reusability of bibliographic data. In effect, libraries have moved from MARC silos to Linked Data silos of incompatible data models. There is currently no universal model for how to represent bibliographic metadata as Linked Data, even though many attempts for such a model have been made.

In this webinar, you’ll see:

  • a survey of published bibliographic Linked Data, the data models proposed for representing bibliographic data as RDF, and tools used for conversion from MARC records
  • an analysis of different use cases for bibliographic Linked Data and how they affect the data model
  • recommendations for choosing a data model

We also present efforts at the National Library of Finland to open up our bibliographic metadata, including the national bibliography Fennica, the national discography Viola and the article database Arto, as Linked Data while trying to learn from the examples of others. We are setting up a conversion process from MARC records via BIBFRAME to RDF, which we are going to publish as Linked Data using various technologies, including a SPARQL endpoint, HDT compressed RDF dumps and a Linked Data Fragments API.
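To make the shape of such a conversion concrete, here is a toy sketch. It is not the Fennica pipeline and not BIBFRAME: the record, the tag-to-property mapping and the `ex:` URIs are invented, with MARC tags 100 (main entry, personal name) and 245 (title statement) standing in for a real mapping:

```python
# Toy sketch of the shape of a MARC-to-RDF conversion. The real pipeline
# described in the webinar targets BIBFRAME; this mapping is illustrative.

marc_record = {
    "001": "rec-12345",          # control number
    "100": "Lönnrot, Elias",     # main entry - personal name
    "245": "Kalevala",           # title statement
}

# Hypothetical tag-to-property mapping for the sketch.
FIELD_MAP = {"100": "dc:creator", "245": "dc:title"}

def marc_to_triples(rec):
    """Turn one flat MARC-like record into (s, p, o) triples."""
    subject = f"ex:{rec['001']}"
    return [(subject, prop, rec[tag])
            for tag, prop in FIELD_MAP.items() if tag in rec]

for t in marc_to_triples(marc_record):
    print(t)
```

The hard part of the real work is exactly what this sketch hides: choosing a target data model so that the output is not yet another incompatible silo.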

This webinar is an extended, in-depth version of the SWIB16 conference presentation "From MARC silos to Linked Data silos?"

Minimum Participant Experience Level: Basic familiarity with bibliographic metadata and Linked Data is assumed


Portrait: Osma Suominen

Osma Suominen works as an information systems specialist at the National Library of Finland. His current activities are centered on publishing bibliographic data, including the Finnish national bibliography Fennica, as Linked Data. He is also one of the creators of the Finto thesaurus and ontology service and is leading development of the Skosmos vocabulary browser used in Finto. Osma Suominen earned his doctoral degree at Aalto University while doing research on semantic portals and the quality of controlled vocabularies within the FinnONTO series of projects. His past accomplishments include the Skosify vocabulary analysis and quality improvement tool and the Linked Data service of Aalto University.

Categories: Linked Data | MARC | bibliographic data
Webinar Type: Overview

Osma Suominen Cover Image

Access: Presentation file

DCMI/ASIS&T Joint Webinar:

Modelado y publicación de los vocabularios controlados del proyecto UNESKOS

Webinar Date: Wednesday, 18 May 2016, 10:00am-11:15am EST

(Modeling and Publishing of Controlled Vocabularies for the UNESKOS Project)
Note: This webinar will be delivered in Spanish.

Resumen: Se presentan los procesos de modelado y publicación de los vocabularios del proyecto UNESKOS aplicando tecnologías de la Web Semántica. Más específicamente, los vocabularios representados son el Tesauro de la UNESCO y la Nomenclatura de Ciencia y Tecnología. Ambos vocabularios están publicados como conjuntos de datos RDF con una estructura que facilita su consulta y reutilización según los principios Linked Open Data. También se muestra cómo se ha aplicado la norma ISO 25964 para representar el Tesauro de la UNESCO utilizando conjuntamente SKOS y la ontología ISO-THES. Asimismo, se analizarán las soluciones tecnológicas empleadas para el proceso de publicación y consulta de ambos vocabularios.

Abstract: This webinar presents the modeling and publishing process of the vocabularies of the UNESKOS project, applying Semantic Web technologies. More specifically, the vocabularies represented are the UNESCO Thesaurus and the Nomenclature for Fields of Science and Technology. Both vocabularies are published as RDF datasets with a structure that facilitates their querying and reuse according to Linked Open Data principles. The webinar will demonstrate the application of the ISO 25964 standard to represent the UNESCO Thesaurus using SKOS together with the ISO-THES ontology. The technological solutions used for publishing and querying both vocabularies will also be discussed.
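For readers unfamiliar with SKOS, the kind of statements involved can be sketched with tuples standing in for RDF triples. The URIs and labels below are invented, not taken from the UNESCO Thesaurus:

```python
# Sketch of the SKOS statements a thesaurus concept typically carries:
# typed as a concept, multilingual labels, broader link, scheme membership.

concept = "ex:concept123"
triples = [
    (concept, "rdf:type", "skos:Concept"),
    (concept, "skos:prefLabel", ("Semantic Web", "en")),
    (concept, "skos:prefLabel", ("Web semántica", "es")),
    (concept, "skos:broader", "ex:concept120"),
    (concept, "skos:inScheme", "ex:exampleThesaurus"),
]

def pref_label(triples, subject, lang):
    """Find the preferred label of a concept in a given language."""
    for s, p, o in triples:
        if s == subject and p == "skos:prefLabel" and o[1] == lang:
            return o[0]

print(pref_label(triples, concept, "es"))  # Web semántica
```

ISO 25964 adds constructs SKOS alone lacks (for example, compound equivalence and full term-level modelling), which is why the webinar pairs SKOS with the ISO-THES ontology.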
Presentador: Juan Antonio Pastor Sánchez es doctor en Documentación por la Universidad de Murcia y profesor en la misma universidad, dentro del Departamento de Información y Documentación. Su actividad docente y de investigación se desarrolla en el área de las tecnologías de la información y la comunicación, más concretamente sobre teoría y modelos de hipertexto, automatización en la gestión de tesauros, modelos de gestión de información y organización del conocimiento, técnicas y especificaciones para la Web Semántica, así como accesibilidad, usabilidad y arquitectura de la información para la Web.

Presenter: Juan Antonio Pastor Sánchez is an Associate Professor in the Department of Information and Documentation at the University of Murcia, Spain, with specializations in Library Science and Documentation. He holds a PhD from the University of Murcia. He performs research on theory and models of hypertext, automation in thesaurus management, models for information management and knowledge organization, techniques and specifications for the Semantic Web, and accessibility, usability and information architecture for the Web.


Juan Sanchez Presentation English Cover Image

Access: Presentation Slides


Juan Sanchez Presentation Spanish Cover Image

Access: Presentation slides (Spanish)

DCMI/ASIS&T Joint Webinar:

SKOS in Two Parts: Generic Tools and Methods for SKOS-based Concept Schemes

The series is in partnership with AIMS: Agricultural Information Management Standards

Webinar 1 Date: Wednesday, 16 March 2016, 10:00am-11:15am EDT (UTC 14:00)
Webinar 2 Date: Wednesday, 6 April 2016, 10:00am-11:15am EDT (UTC 14:00)

Series Abstract: In the past seven years, SKOS has become a widely recognized and used common interchange format for thesauri, classifications, and other types of vocabularies. This has opened a huge opportunity for the development of generic tools and methods that apply to all vocabularies expressible in SKOS. While expensive, proprietary, or custom-developed solutions aimed at one particular thesaurus or classification have long been dominant, more and more open source tools are now being created to deal with various aspects of vocabulary management. In this series of two webinars with Joachim Neubert (ZBW Leibniz Information Centre for Economics, Germany) and Osma Suominen (National Library of Finland), we start on 16 March 2016 with Webinar 1, examining skos-history, a method and toolset for nailing down changes in a vocabulary. Webinar 2 follows on 6 April 2016, focusing on Skosmos, a full-fledged web application for publishing SKOS vocabularies.

Webinar 1: Change Tracking in Knowledge Organization Systems with skos-history

With Joachim Neubert (ZBW Leibniz Information Centre for Economics, Germany) and Osma Suominen (National Library of Finland)

Webinar 1 Abstract: When a new version of a vocabulary is published, users want to know "What's new?" and "What has changed?" Vocabulary managers have had differing strategies for answering these questions, relying on internal logs of the vocabulary management system or on an intellectually curated list of the changes deemed relevant. These methods are generally not available to third parties who use a vocabulary or who, for example, are trying to keep vocabulary mappings up to date.

Having vocabularies published in SKOS as RDF triples has changed this situation: vocabularies can be compared algorithmically, and deltas between versions can be computed. This data can be loaded into a version store and evaluated with SPARQL queries. The published versions alone are therefore sufficient to derive the differences.

The webinar will explain how you can create a version store, how skos-history interlinks versions and deltas, and how queries can get a grip on added or removed concepts, changed notations, or merges and splits of concepts. We will show how aggregated change information about a concept scheme can be obtained and how the complete change history of a single concept across multiple versions can be traced. Finally, you will learn how to adapt skos-history queries to the features of a particular concept scheme in which you are interested.

Skill level: Intermediate. A working knowledge of SKOS and a basic knowledge of triple stores and SPARQL queries are presumed.
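The delta idea at the heart of this webinar can be reduced to a few lines. A minimal sketch, assuming two vocabulary versions are available as sets of triples (skos-history itself loads the versions into an RDF version store and computes deltas with SPARQL; plain set difference shows the principle, with invented data):

```python
# Two published versions of a vocabulary, compared as plain sets of
# triples. The difference between versions is fully derivable from the
# published data alone, which is the key point of skos-history.

v1 = {
    ("ex:c1", "skos:prefLabel", "Computers"),
    ("ex:c2", "skos:prefLabel", "Economics"),
}
v2 = {
    ("ex:c1", "skos:prefLabel", "Computer science"),  # label changed
    ("ex:c2", "skos:prefLabel", "Economics"),
    ("ex:c3", "skos:prefLabel", "Data science"),      # concept added
}

insertions = v2 - v1
deletions = v1 - v2

print("added:", sorted(insertions))
print("removed:", sorted(deletions))
```

Detecting higher-level events (a renamed, merged, or split concept rather than a pair of raw insert/delete triples) is exactly what the skos-history SPARQL queries layer on top of such raw deltas.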

Webinar 2: Publishing SKOS concept schemes with Skosmos

With Osma Suominen (National Library of Finland)

Webinar 2 Abstract: With more and more thesauri, classifications and other knowledge organization systems being published as Linked Data using SKOS, the question arises of how best to make them available on the Web. While just publishing the Linked Data triples is possible with a number of RDF publishing tools, those tools are not well suited to SKOS data because they do not support term-based searching and lookup.

This webinar presents Skosmos, an open source web-based SKOS vocabulary browser that uses a SPARQL endpoint as its back end. It can be used by, for example, libraries and archives as a publishing platform for controlled vocabularies such as thesauri, lightweight ontologies, classifications and authority files. The Finnish national thesaurus and ontology service Finto, operated by the National Library of Finland, is built using Skosmos.

Skosmos provides a multilingual user interface for browsing and searching the data and for visualizing concept hierarchies. The user interface has been developed by analyzing the results of repeated usability tests. All of the SKOS data is made available as Linked Data, and a developer-friendly REST API provides access for using the vocabularies in other applications such as annotation systems.

We will describe what kind of infrastructure Skosmos requires and how to set it up for your own SKOS data. We will also present examples of Skosmos being used around the world.

Skill level: Intermediate. A working knowledge of SKOS and a basic knowledge of triple stores and SPARQL queries are presumed.
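To give a feel for the REST API mentioned above, here is a sketch of constructing a search request for a Skosmos-style endpoint. No network call is made; the base URL is invented, and the `/rest/v1/search` path and parameter names are assumptions modelled on the public Finto API, so check them against your own installation's documentation:

```python
from urllib.parse import urlencode

# Building (not sending) a term-search request against a Skosmos-style
# REST API. Endpoint path and parameters are assumptions; base URL is fake.

BASE = "https://api.example.org/rest/v1/search"

def search_url(query, lang="en", vocab=None):
    """Assemble a search URL; '*' wildcards are percent-encoded."""
    params = {"query": query, "lang": lang}
    if vocab:
        params["vocab"] = vocab
    return f"{BASE}?{urlencode(params)}"

print(search_url("cat*", lang="en", vocab="yso"))
# https://api.example.org/rest/v1/search?query=cat%2A&lang=en&vocab=yso
```

An annotation tool would issue such a request and offer the returned concepts as indexing suggestions, which is the integration scenario the abstract describes.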


Joachim Neubert is a scientific software developer at the ZBW Leibniz Information Centre for Economics. He published the STW Thesaurus for Economics and several other datasets as Linked Open Data. In 2009, he started the SWIB (Semantic Web in Libraries) conference and has served as co-chair of its programme committee ever since. As an invited expert, he took an active part in the Library Linked Data Incubator Group of the World Wide Web Consortium (W3C). His research interests include knowledge organization systems and authorities, Linked Data, and web-based information systems and applications, on which he reports occasionally on ZBW Labs.

Osma Suominen is currently working as an information systems specialist at the National Library of Finland. He is involved in publishing library data as Linked Data, maintaining the thesaurus and ontology service, and leading development of the Skosmos vocabulary browser used in Finto. He is currently also assisting FAO (UN), CABI (UK), and NAL (US) in creating a Global Agricultural Concept Scheme by merging their existing thesauri, using Linked Data tools and approaches. Osma Suominen earned his doctoral degree at Aalto University while doing research on semantic portals and quality of controlled vocabularies within the FinnONTO series of projects. His past accomplishments include the Skosify vocabulary analysis and quality improvement tool, and the Linked Data service of Aalto University.

Categories: Simple Knowledge Organization System (SKOS) | change management
Webinar Type: Innovative practices

Webinar 1: Change Tracking in Knowledge Organization Systems with skos-history

Joachim Neubert & Osma Suominen

Access: Presentation Slides

Webinar 2: Publishing SKOS Concept Schemes with Skosmos

Access: Video Recording

DCMI/ASIS&T Joint Webinar:

Linked Data Fragments: Querying multiple Linked Data sources on the Web

Webinar Date: Wednesday, 17 February 2016, 10:00am-11:15am EST (UTC 15:00)

Abstract: The dream of Linked Data: if we just get our data online, the promised Semantic Web will eventually rise. Everybody will be able to query our data with minimal effort. We will be able to integrate data from multiple sources on the fly. Everything will just work and data will flow freely ever after.

Well, that hasn't really happened yet.

Even though we have published billions of triples on the Web, there are few places that reliably let us execute queries over them. Integration is still very limited. When will our efforts ever pay off?

This webinar introduces you to the Linked Data Fragments family of technologies, which take a much more pragmatic view on the Web of Data. Whereas one of the main problems with the Semantic Web is currently the high publication cost of data (with unknown return), Linked Data Fragments proposes to shift the complexity of querying from the server to the client. This makes publishing Linked Data affordable, and realistic on the Web.

You might have heard about Linked Data Fragments already, or you might just be curious about scalable Linked Data publishing or querying on the Web. This webinar by Ruben Verborgh, creator and lead researcher of Linked Data Fragments, is the perfect introduction to the principles and vast potential of this technology.

In this webinar, you'll see:

  • what Linked Data Fragments is and what it means for you
  • how to execute queries over multiple Linked Data sources live on the Web
  • how to publish your Linked Data at low cost, so others can query it
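The core idea — servers publish only simple triple-pattern fragments, while clients evaluate full queries by combining them — can be illustrated with a minimal in-memory sketch (the data and names below are invented for illustration):

```python
def match_fragment(triples, s=None, p=None, o=None):
    """Server side: return the fragment of triples matching one pattern.
    None acts as a wildcard, as in a Triple Pattern Fragments request."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Client side: answer "which authors wrote books?" by joining two
# cheap pattern requests instead of sending one complex query to
# an expensive server-side SPARQL endpoint.
data = [
    ("book1", "type", "Book"),
    ("book1", "author", "alice"),
    ("page1", "type", "WebPage"),
]
books = {t[0] for t in match_fragment(data, p="type", o="Book")}
authors = [t[2] for t in match_fragment(data, p="author") if t[0] in books]
print(authors)  # ['alice']
```

The join logic lives in the client, so the server only ever has to answer cheap single-pattern requests — which is exactly the cost shift the webinar discusses.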

Minimum Participant Experience Level: Fundamental awareness of Linked Data required; novice level experience recommended


Portrait: Ruben Verborgh

Ruben Verborgh
is a researcher in semantic hypermedia at Ghent University – iMinds, Belgium and a postdoctoral fellow of the Research Foundation Flanders. He explores the connection between Semantic Web technologies and the Web's architectural properties, with the ultimate goal of building more intelligent clients. Along the way, he became fascinated by Linked Data, REST/hypermedia, Web APIs, and related technologies. He's a co-author of two books on Linked Data, and has contributed to more than 140 publications on Web-related topics for international conferences and journals.

Categories: Linked Data | Linked Data Fragments | Web querying |
availability | scalability | SPARQL | publication
Webinar Type: Overview & Technical Briefing

Verborgh Webinar

Access: Presentation Slides

DCMI/ASIS&T Joint Webinar:

Creating Content Intelligence: Harmonized Taxonomy & Metadata in the Enterprise Context

Webinar Date: Wednesday, 27 January 2016, 10:00am-11:15am EST (UTC 15:00)

Abstract: Many organizations have content dispersed across multiple independent repositories, often with a real lack of metadata consistency. The attention given to enterprise data is often not extended to unstructured content, widening the gap between the two worlds and making it nearly impossible to provide accurate business intelligence, a good user experience, or even basic findability.

How do you bring all those disparate efforts together to create content intelligence across the organization? This webinar will describe the benefits and challenges in developing metadata and taxonomy across multiple functional areas, creating a unified Enterprise Content Architecture (ECA).

Hear about real enterprise metadata & taxonomy harmonization projects in different contexts, including a greeting card company, a media company, an automotive manufacturer and a consumer food manufacturer. See how they worked to harmonize across a number of diverse systems that supported multiple functions, from creative processes to manufacturing to reporting.

In this webinar, you will learn:

  • how the concept of Enterprise Content Architecture unifies multiple disciplines, including information management, data management and content strategy;
  • the difference and similarities between master data and business metadata;
  • how enterprise-level metadata and taxonomy helps drive semantic interoperability and improve business processes;
  • how taxonomy can be harmonized across diverse systems and provided as a service; and
  • how to build support and governance for enterprise-level attention to taxonomy and metadata from within a project.

Participant Experience Level: Basic familiarity with taxonomy and metadata assumed.


Portrait: Stephanie Lemieux

Stephanie Lemieux
is the president and primary consultant at Dovecot Studio. She is a passionate advocate of taxonomy, search and other marvelous pursuits in content organization. She has worked with organizations in various industries, such as Nickelodeon, General Mills, UPS, and the United Nations. Prior to focusing her energies on Dovecot Studio, she was a senior consultant and taxonomy practice lead with Earley & Associates. She speaks, blogs and writes whenever she can to help spread the good taxonomy word. Stephanie has a Masters degree in Library and Information Studies (MLIS) from McGill University with a specialization in knowledge management.

Categories: Enterprise Content Architecture (ECA) | data management |
content strategy | enterprise-level taxonomy & metadata
Webinar Type: Overview & Technical Briefing

Free | Free | US$15

After registering you will receive a confirmation email containing information about joining the Webinar. After the webinar broadcast, you will have unlimited access to the recorded presentation.

If you are not already a DCMI member, join now and get the webinar for free. Please note that processing of your membership application takes a minimum of 48 hours.

View Webinar System Requirements

DCMI/ASIS&T Joint Webinars: Schema.org in Two Parts: From Use to Extension

Series Abstract: When Schema.org was first introduced in 2011, it was seen by many as a grab, by Google and other search engines, for the semantic web landscape, or as something of interest only to the SEO community wanting their products displayed more prominently in search results. It was therefore somewhat of a surprise to the library community when, less than a year later, the global library cooperative OCLC introduced structured data markup into the pages for its 300 million plus resources.

Things have changed significantly since those early days. Schema.org structured data is now published on over 10 million web domains; the vocabulary has expanded to include over 600 Types and nearly 1,000 Properties; and its core capability for describing bibliographic resources has been greatly extended. There is now a specific bibliographic extension, and implementations and discussions are becoming common in the library community.

Join independent consultant Richard Wallis, former Technology Evangelist for OCLC and currently working with Google on Schema.org, for this two-part, in-depth mini-series look at Schema.org, its use, and its extension in the bibliographic domain and beyond.

Part 1: Fit For a Bibliographic Purpose

Date/Time: Wednesday, 18 November 2015, 10:00am-11:15am EST (UTC 15:00)

Abstract: In this first webinar in the series, independent consultant Richard Wallis traces the history of the Schema.org vocabulary and its applicability to the bibliographic domain. He will share the background to, and activities of, the Schema Bib Extend W3C Community Group he chairs: why it was set up, how it approached the creation of bibliographic extension proposals, and how those proposals were shaped. He will then review the current status of the vocabulary and its recently introduced extensions. Although Richard will be using bibliographic examples, the content of this webinar will be of interest and relevance to those in other domains and/or considering other extensions.
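For readers new to Schema.org markup, a minimal JSON-LD description of a bibliographic resource of the kind discussed here might look as follows (Book, Person and the properties shown are standard Schema.org terms; the values are invented):

```python
import json

# A minimal Schema.org description of a book, serialized as JSON-LD.
# The Book and Person types and the name/author/datePublished
# properties are real Schema.org terms; the values are invented.
record = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "An Example Catalogue Record",
    "author": {"@type": "Person", "name": "A. N. Author"},
    "datePublished": "2015",
}
print(json.dumps(record, indent=2))
```

Embedded in a web page inside a `script type="application/ld+json"` element, this is the kind of markup search engines pick up from library catalogue pages.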

Part 2: Extending Potential and Possibilities

Date/Time: Wednesday, 2 December 2015, 10:00am-11:15am EST (UTC 15:00)

Abstract: In this second, more technical webinar in the series, independent consultant Richard Wallis explains the Schema.org extension mechanism for external and reviewed/hosted extensions and their relationship to the core vocabulary. He will take an in-depth look at, demonstrate, and share experiences in designing and creating a potential extension to the vocabulary. He will step through the process of creating the required vocabulary definition and example files on a local system using a few simple tools, then sharing them on a publicly visible temporary cloud instance before proposing them to the group.


Portrait: Richard Wallis

Richard Wallis, Independent Consultant, is a distinguished thought leader in Linked Data and the Semantic Web who has been at the forefront of the emergence of these technologies for over 20 years. He is Chair of the Schema Bib Extend and Schema Architypes W3C Community Groups and an evangelist for the adoption of Linked Data in cultural heritage and the wider Web. He has an international reputation for insightful and entertaining keynote sessions at library, Web, and Semantic Web focused events. Currently working with OCLC, Google, and the banking industry on the extension, application and use of the Schema.org vocabulary, he is a pragmatist who believes in searching for implementable solutions.

Categories: Schema.org | extension | bibliographic metadata
Webinar Type: Innovative practices

Wallis Webinar #1

Access: Presentation Slides
Access: Video Recording

Wallis Webinar #2

Access: Presentation Slides
Access: Video Recording

DCMI/ASIS&T Joint Webinar:

Implementing Linked Data in Low-Resource Conditions

Webinar Date: Wednesday, 9 September 2015 (rescheduled from 17 June 2015), 10:00am-11:15am EDT (UTC 14:00)

Abstract: Opening up and linking data is becoming a priority for many data producers, whether because of institutional requirements, a wish to consume data in newer applications, or simply the need to keep pace with current developments. Since 2014, this priority has been gaining momentum with the Global Open Data in Agriculture and Nutrition initiative (GODAN). However, typical small and medium-size institutions have to deal with constrained resources, which often hamper their ability to make their data publicly available. This webinar will be of interest to any institution seeking ways to publish and curate data in the Linked Data world.

Keizer and Caracciolo will provide an overview of the bottlenecks that institutions typically face when entering the world of open and linked data, and will provide recommendations on how to proceed. They will also discuss the use of standard and linked vocabularies to produce linked data, especially in the area of agriculture. They will describe AGRISAs, a web-based resource linking agricultural datasets, as an example of a linked data application resulting from the collaboration of small institutions. They will also mention AgriDrupal, a Drupal distribution that supports the production and consumption of linked datasets.

Redux: An update of a webinar first presented in 2013.


Portrait: Johannes Keizer

Johannes Keizer
has worked for the Food and Agriculture Organization of the UN since 1998, primarily as head of the FAO documentation group. The bibliographic database AGRIS and the multilingual concept scheme AGROVOC were completely remodeled under his leadership. In the Office of Knowledge Exchange, Research and Extension, he heads a staff of 20—the AIMS (Agricultural Information Management Standards and Services) team which provides standards, tools, and advice for FAO stakeholders. The AIMS Team provides the technical backbone for the global Coherence in Information for Agricultural Research for Development (CIARD) Initiative. Through EC framework projects such as NeOn, D2Science, and agINFRA, the AIMS Team has channeled the results of innovative European research into the international work of FAO to combat hunger and poverty in the world.

Portrait: Caterina Caracciolo

Caterina Caracciolo
Caterina Caracciolo, PhD, has served as an Information Specialist at the Food and Agriculture Organization of the United Nations (FAO) since 2006. Currently, she is responsible for the AGROVOC Concept Scheme, and participates in the GACS Working Group and the Wheat Data Interoperability Working Group (RDA). Her main interests lie in the area of semantics for data integration and sharing, with a special focus on data specific to the domains of agriculture, biodiversity, natural science and the environment in the broad sense. She regularly serves on program committees for international conferences and publishes in conference proceedings and journals in the areas of the semantic web and information sharing in agriculture and biodiversity. She has worked in various EC-funded projects and also served as Work Package leader in the NeOn and SemaGrow projects.

Categories: open data | developing countries | AGRISAs |
linking datasets | publishing/curating data |
low-resource conditions
Webinar Type: Innovative practices

Keizer and Caracciolo webinar

Access: Presentation Slides

DCMI/ASIS&T Joint Webinar:

OpenAIRE Guidelines: Promoting Repositories Interoperability and Supporting Open Access Funder Mandates

Webinar Date: Wednesday, 1 July 2015, 10:00am-11:15am EDT (UTC 14:00)

Abstract: The OpenAIRE Guidelines for Data Source Managers provide recommendations and best practices for the encoding of bibliographic information in OAI metadata. They have adopted established standards for different classes of content providers: (1) Dublin Core for textual publications in institutional and thematic repositories; (2) the DataCite Metadata Kernel for research data repositories; and (3) CERIF-XML for Current Research Information Systems.

The principle of these guidelines is to improve the interoperability of bibliographic information exchange between repositories, e-journals, CRIS systems and research infrastructures. They are a means to help content providers comply with funders' Open Access policies (e.g. the European Commission's Open Access mandate in Horizon 2020) and to standardize the syntax and semantics of funder/project information, open access status, and links between publications and datasets. The presenters will provide an overview of the guidelines, implementation support in major platforms, and tools for validation.
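As a small illustration of what the guidelines standardize, an oai_dc record can carry funder/project information and open access status in `dc:relation` and `dc:rights` fields using `info:eu-repo` URIs (the URI patterns follow the OpenAIRE guidelines; the title, funder programme and grant number below are invented examples):

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

record = ET.Element("record")
for tag, text in [
    ("title", "An Example Article"),
    # Project link in the info:eu-repo syntax recommended by the
    # OpenAIRE guidelines (funder/programme/grant are invented):
    ("relation", "info:eu-repo/grantAgreement/EC/H2020/123456"),
    # Open access status from the info:eu-repo vocabulary:
    ("rights", "info:eu-repo/semantics/openAccess"),
]:
    ET.SubElement(record, f"{{{DC}}}{tag}").text = text

xml = ET.tostring(record, encoding="unicode")
print(xml)
```

Because the syntax is fixed, aggregators such as OpenAIRE can reliably parse the funder, programme and grant number out of an ordinary Dublin Core field.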


Portrait: Pedro Príncipe

Pedro Príncipe is an information specialist at the University of Minho Documentation Services (Portugal), in the Open Access Projects Office. He has worked since 2010 on the OpenAIRE projects and infrastructure, in support, helpdesk and dissemination activities. He is a member of the OpenAIRE guidelines team and co-author of the OpenAIRE Guidelines for Data Source Managers.

Portrait: Jochen Schirrwagen

Jochen Schirrwagen is a research fellow at Bielefeld University Library, Germany. He has worked since 2008 on the knowledge infrastructure projects DRIVER and OpenAIRE, in the fields of metadata management, aggregation and contextualization. He is co-author of the OpenAIRE Guidelines for Data Source Managers and coordinates their further development.

Categories: OpenAIRE | bibliographic information exchange
Webinar Type: Overview

Principe and Schirrwagen webinar

Access: Presentation Slides

DCMI/ASIS&T Joint Webinar:

Digital Preservation Metadata and Improvements to PREMIS in Version 3.0

Webinar Date: Wednesday, 27 May 2015, 10:00am-11:15am EDT (UTC 14:00)

Abstract: The PREMIS Data Dictionary for Preservation Metadata is the international standard for metadata to support the preservation of digital objects and ensure their long-term usability. Developed by an international team of experts, PREMIS is implemented in digital preservation projects around the world, and support for PREMIS is incorporated into a number of commercial and open-source digital preservation tools and systems. The PREMIS Editorial Committee coordinates revisions and implementation of the standard, which consists of the Data Dictionary, an XML schema, and supporting documentation.

The PREMIS Data Dictionary is currently at version 2.2; a new major release, 3.0, is due out this summer. This webinar gives a brief overview of why digital preservation metadata is needed, shows examples of digital preservation metadata, shows how PREMIS can be used to capture this metadata, and illustrates some of the changes coming in version 3.0.
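As a rough illustration of the kind of metadata PREMIS captures, the sketch below assembles a PREMIS-style object description recording a fixity check (the element names mirror PREMIS semantic units such as objectIdentifier and fixity, and the digest value is invented; treat this as an illustrative sketch, not schema-validated PREMIS):

```python
import xml.etree.ElementTree as ET

# Sketch of PREMIS-style object metadata recording a fixity check.
# Element names mirror PREMIS semantic units; values are invented.
P = "http://www.loc.gov/premis/v3"
ET.register_namespace("premis", P)

obj = ET.Element(f"{{{P}}}object")
ident = ET.SubElement(obj, f"{{{P}}}objectIdentifier")
ET.SubElement(ident, f"{{{P}}}objectIdentifierType").text = "UUID"
ET.SubElement(ident, f"{{{P}}}objectIdentifierValue").text = "example-uuid-0001"

chars = ET.SubElement(obj, f"{{{P}}}objectCharacteristics")
fixity = ET.SubElement(chars, f"{{{P}}}fixity")
ET.SubElement(fixity, f"{{{P}}}messageDigestAlgorithm").text = "SHA-256"
ET.SubElement(fixity, f"{{{P}}}messageDigest").text = "0123abcd"  # invented

xml = ET.tostring(obj, encoding="unicode")
print(xml)
```

Recording the digest alongside the object is what later allows a repository to re-compute the checksum and detect silent corruption.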


Portrait: Angela Dappert

Angela Dappert
Dr. Angela Dappert is Senior Research Fellow at the University of Portsmouth. She has researched and published widely on digital preservation. She has consulted for archives and libraries on digital life cycle management and policies, led and conducted research in the EU-co-funded Planets, SCAPE, TIMBUS, and E-ARK projects, and applied digital preservation practice at the British Library through work on digital repository implementation, digital metadata standards, digital asset registration, digital asset ingest, preservation risk assessment, planning and characterization, and data carrier stabilization. Angela holds a Ph.D. in Digital Preservation, an M.Sc. in Medical Informatics and an M.Sc. in Computer Sciences. She serves on the PREMIS Editorial Committee and the Digital Preservation Programme Board of National Records of Scotland.

Categories: PREMIS 3.0 | preservation metadata
Webinar Type: Standard update

Dappert webinar

Access: Presentation slides

DCMI/ASIS&T Joint Webinar:

From 0 to 60 on SPARQL queries in 50 minutes

Webinar Date: Wednesday, 13 May 2015, 10:00am-11:15am EDT (UTC 14:00)

Abstract: This webinar provides an introduction to SPARQL, a query language for RDF. Users will gain hands-on experience crafting queries, starting simply but evolving in complexity. These queries will focus on coinage data in a SPARQL endpoint: numismatic concepts defined in a SKOS-based thesaurus, and physical specimens from three major museum collections (American Numismatic Society, British Museum, and Münzkabinett of the Staatliche Museen zu Berlin) linked to these concepts. Results generated from these queries in the form of CSV may be imported directly into Google Fusion Tables for immediate visualization in the form of charts and maps.

This webinar was first presented as a training session in the LODLAM Training Day at SemTech2014.
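As a flavour of where such a session starts, a first SPARQL query might simply list SKOS concepts with their English preferred labels (the prefix is the standard SKOS namespace; endpoint and dataset are left unspecified):

```python
# A starter SPARQL query of the kind the webinar builds up from:
# list SKOS concepts together with their English preferred labels.
query = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?concept ?label
WHERE {
  ?concept a skos:Concept ;
           skos:prefLabel ?label .
  FILTER (lang(?label) = "en")
}
LIMIT 10
"""
print(query)
```

From this skeleton, the session's later queries add joins to museum specimen data and aggregation for the CSV exports mentioned above.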


Portrait: Ethan Gruber

Ethan Gruber
Ethan Gruber is the Web and Database Developer for the American Numismatic Society (ANS). With almost ten years of experience in digital humanities and cultural heritage Web development projects, Ethan is responsible for developing a new public interface for the Society's collections of objects and archives. He is the chief architect of Numishare, an open-source framework for delivering coin collections online, and of various ANS projects which implement this software: Online Coins of the Roman Empire and Coin Hoards of the Roman Republic.

Categories: SPARQL | Resource Description Framework (RDF)
Webinar Type: Praxis

Gruber webinar

Access: Video Recording (.wmv)
Access: Presentation Slides

DCMI/ASIS&T Joint Webinar:

Approaches to Making Dynamic Data Citable: Recommendations of the RDA Working Group

Webinar Date: Wednesday, 8 April 2015, 10:00am-11:15am EDT (UTC 14:00)

Abstract: Being able to reliably and efficiently identify entire datasets, or subsets of data in large and dynamically growing or changing datasets, constitutes a significant challenge for a range of research domains. In order to repeat an earlier study, or to apply data from an earlier study to a new model, we need to be able to precisely identify the subset of data used. While verbal descriptions of how the subset was created (e.g. by providing selected attribute ranges and time intervals) are hardly precise enough and do not support automated handling, keeping redundant copies of the data in question does not scale to the big-data settings encountered in many disciplines today. Furthermore, we need to be able to handle situations where new data gets added or existing data gets corrected or otherwise modified over time. Conventional approaches, such as assigning persistent identifiers to entire datasets or to individual subsets or data items, are thus not sufficient.

In this webinar, Andreas Rauber will review the challenges identified above and discuss solutions that are currently elaborated within the context of the working group of the Research Data Alliance (RDA) on Data Citation: Making Dynamic Data Citeable. The approach is based on versioned and time-stamped data sources, with persistent identifiers being assigned to the time-stamped queries/expressions that are used for creating the subset of data. We will further review results from the first pilots evaluating the approach.
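The core of the approach can be sketched in a few lines: records carry validity timestamps, a subset is defined by re-executing a stored query as of a given time, and the persistent identifier is derived from the timestamped query rather than from a redundant data copy (the store, query shape and hashing scheme below are all invented for illustration):

```python
import hashlib

# Versioned store: each record keeps insertion/retirement timestamps.
records = [
    {"value": 10, "added": 1, "removed": None},
    {"value": 20, "added": 2, "removed": 4},   # later corrected away
    {"value": 30, "added": 5, "removed": None},
]

def subset(query, as_of):
    """Re-execute a stored query against the store as of a timestamp."""
    return [r["value"] for r in records
            if r["added"] <= as_of
            and (r["removed"] is None or r["removed"] > as_of)
            and r["value"] >= query["min"]]

def cite(query, as_of):
    """Derive a stable identifier from the timestamped query itself,
    instead of from a stored copy of the subset."""
    text = f"min={query['min']}@t={as_of}"
    return hashlib.sha256(text.encode()).hexdigest()[:12]

print(subset({"min": 10}, as_of=3))  # [10, 20]
print(subset({"min": 10}, as_of=5))  # [10, 30] — the correction is visible
```

Because the query and timestamp are what get persisted, the same citation always resolves to the same subset, even after the underlying data has changed.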


Portrait: Andreas Rauber

Andreas Rauber
Andreas Rauber is Associate Professor at the Department of Software Technology and Interactive Systems (IFS) at the Vienna University of Technology (TU-Wien). He is also president of AARIT, the Austrian Association for Research in IT, and a Key Researcher at Secure Business Austria (SBA-Research). He is co-chairing the RDA Working Group on Data Citation together with Ari Asmi and Dieter van Uytvanck.

He received his MSc and PhD in Computer Science from the Vienna University of Technology in 1997 and 2000, respectively. In 2001 he joined the National Research Council of Italy (CNR) in Pisa as an ERCIM Research Fellow, followed by an ERCIM Research position at the French National Institute for Research in Computer Science and Control (INRIA) at Rocquencourt, France, in 2002. From 2004-2008 he was also head of the iSpaces research group at the eCommerce Competence Center (ec3).


Categories: Research Data Alliance (RDA) | citable dynamic data
Webinar Type: Overview

Rauber webinar

Access: Presentation slides

DCMI/ASIS&T Joint Webinar:

VocBench 2.0: A Web Application for Collaborative Development of Multilingual Thesauri

Webinar Date: Wednesday, 4 March 2015, 10:00am-11:15am EST (UTC 15:00)

Abstract: VocBench is a web-based platform for the collaborative maintenance of multilingual thesauri. VocBench is an open source project, developed in the context of a collaboration between the Food and Agriculture Organization of the UN (FAO) and the University of Rome Tor Vergata. VocBench is currently used for the maintenance of AGROVOC, EUROVOC, GEMET, the thesaurus of the Italian Senate, the Unified Astronomy Thesaurus of Harvard University, as well as other thesauri.

VocBench has a strong focus on collaboration, supported by workflow management for content validation and publication. Dedicated user roles provide a clean separation of competencies, addressing different specificities ranging from management aspects to vertical competencies in content editing, such as conceptualization versus terminology editing. Extensive support for scheme management allows editors to fully exploit the possibilities of the SKOS model, as well as to fulfill its integrity constraints.

Since version 2, VocBench has been open source software, open to a large community of users and institutions supporting its development with their feedback and ideas. During the webinar we will demonstrate the main features of VocBench from the point of view of users and system administrators, and explain how you can join the project.


Portrait: Caterina Caracciolo

Caterina Caracciolo
Caterina Caracciolo, PhD, has served as an Information Specialist at the Food and Agriculture Organization of the United Nations (FAO) since 2006. Currently, she is responsible for the AGROVOC Concept Scheme, and participates in the GACS Working Group and the Wheat Data Interoperability Working Group (RDA). Her main interests lie in the area of semantics for data integration and sharing, with a special focus on data specific to the domains of agriculture, biodiversity, natural science and the environment in the broad sense. She regularly serves on program committees for international conferences and publishes in conference proceedings and journals in the areas of the semantic web and information sharing in agriculture and biodiversity. She has worked in various EC-funded projects and also served as Work Package leader in the NeOn and SemaGrow projects.

Portrait: Armando Stellato

Armando Stellato
Armando Stellato, PhD, is a researcher at the University of Rome Tor Vergata, where he carries out research and teaching in the fields of Knowledge Representation and Knowledge-Based Systems. He is the author of more than 70 publications in conference proceedings and journals in the fields of the Semantic Web, Natural Language Processing and related areas, and has been a member of the program committees of over 30 international scientific conferences and workshops. Currently his main interests cover architecture design for knowledge-based systems, knowledge acquisition and onto-linguistic interfaces, for which he has participated in several EU-funded projects, such as Crossmarc, Moses, Cuspis, Diligent, NeOn, INSEARCH, SCIDIP-ES, agINFRA and SemaGrow. Dr. Stellato is also a consultant at the Food and Agriculture Organization of the United Nations (FAO) as Semantic Architect, working on all aspects of the maintenance and publication of FAO RDF vocabularies such as AGROVOC, Biotech and Journal Authority Data, and on the development of VocBench, an application for the collaborative management of RDF vocabularies.


Categories: VocBench | value vocabularies | maintenance of multilingual thesauri |
community-developed vocabularies | Simple Knowledge Organization System (SKOS)
Webinar Type: Praxis

Stellato & Caracciolo webinar

Access: Presentation slides

DCMI/ASIS&T Joint Webinar:

The Libhub Initiative: Increasing the Web Visibility of Libraries

Webinar Date: Wednesday, 7 January 2015, 10:00am-11:15am EST (UTC 15:00)

Abstract: As a founding sponsor, Zepheira has introduced the Libhub Initiative to create an industry-wide focus on the collective visibility of libraries and their resources on the Web. Libraries and memory organizations have rich content and resources that the Web cannot see or use. The Libhub Initiative aims to find common ground for libraries, providers, and partners to publish and use data with non-proprietary Web standards. Libraries can then communicate in a way Web applications understand and Web users can see, through the use of enabling technologies like Linked Data and shared vocabularies such as Schema.org and BIBFRAME. The Libhub Initiative uniquely prioritizes the linking of these newly exposed library resources to each other and to other resources across the Web, a critical requirement for increased Web visibility.

In this webinar, Eric will talk about the transition libraries must make to achieve Web visibility, explain recent trends that support these efforts, and introduce the Libhub Initiative — an active exploration of what can happen when libraries begin to speak the language of the Web.


Portrait: Eric Miller

Eric Miller
Eric Miller is the President of Zepheira. Prior to founding Zepheira, Eric led the Semantic Web Initiative for the World Wide Web Consortium (W3C) at MIT, where he provided architectural and technical leadership in the design and evolution of the Semantic Web. Eric is a frequent and sought-after international speaker in the areas of international Web standards, knowledge management, collaboration, development and deployment.

Categories: Libhub Initiative | Schema.org | BIBFRAME | library Web visibility
Webinar Type: Overview & Praxis

Miller webinar

Access: Presentation slides

2014 Webinars

DCMI/ASIS&T Joint Webinar:

The Learning Resource Metadata Initiative: describing learning resources with Schema.org, and more?

Webinar Date: Wednesday, 19 November 2014, 10:00am-11:15am EST (UTC 15:00)

Abstract: The Learning Resource Metadata Initiative (LRMI) is a collaborative initiative that aims to make it easier for teachers and learners to find educational materials through major search engines and specialized resource discovery services. The approach taken by LRMI is to extend the Schema.org ontology so that educationally significant characteristics and relationships can be expressed. In this webinar, Phil Barker and Lorna M. Campbell of Cetis will introduce and present the background to LRMI, its aims and objectives, and who is involved in achieving them. The webinar will outline the technical aspects of the LRMI specification, describe some example implementations and demonstrate how the discoverability of learning resources may be enhanced. Phil and Lorna will present the latest developments in LRMI implementation, drawing on an analysis of its use by a range of open educational resource repositories and aggregators, and will report on the potential of LRMI to enhance education search and discovery services. While the development of LRMI has been inspired by Schema.org, the webinar will also include discussion of whether LRMI has applications beyond it.
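To make the "educationally significant characteristics" concrete, here is a minimal JSON-LD sketch of a learning resource using LRMI properties that were added to Schema.org (learningResourceType, typicalAgeRange and educationalUse are real LRMI terms; the resource itself is invented):

```python
import json

# A learning resource described with LRMI properties on a standard
# Schema.org CreativeWork; the values are invented examples.
resource = {
    "@context": "https://schema.org",
    "@type": "CreativeWork",
    "name": "Introduction to Fractions",
    "learningResourceType": "lesson plan",
    "typicalAgeRange": "8-10",
    "educationalUse": "homework",
}
print(json.dumps(resource, indent=2))
```

It is properties like these, absent from generic web metadata, that let a discovery service answer queries such as "lesson plans suitable for 8- to 10-year-olds".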


Portrait: Lorna Campbell

Lorna Campbell
Lorna M. Campbell has worked in the domain of open education technology and interoperability standards for over fifteen years and has contributed to the development of a number of learning resource metadata specifications. Phil and Lorna were commissioned by Creative Commons to manage the third phase of the Learning Resource Metadata Initiative. LRMI is co-led by Creative Commons and the Association of Educational Publishers (AEP), now the 501(c)(3) arm of the Association of American Publishers.

Portrait: Phil Barker

Phil Barker
Phil Barker is a research fellow at Heriot-Watt University who has supported the use of learning technology in Higher Education for twenty years. For much of this time he has worked with Lorna M. Campbell as part of Cetis. His work focuses on supporting the discovery and selection of appropriate resources, and he has contributed to the development of a number of learning resource metadata specifications. He was on the technical working group of the Learning Resource Metadata Initiative and has since worked on the third phase of LRMI, promoting its uptake and use.

Categories: Learning Resource Metadata Initiative (LRMI) | Schema.org | search engines | markup languages
Webinar Type: Praxis

Barker & Campbell webinar

Access: Presentation slides

DCMI/ASIS&T Joint Webinar:

How to pick the low hanging fruits of Linked Data

Webinar Date: Wednesday, 21 May 2014, 10:00am EDT (UTC 14:00)

Abstract: The concept of Linked Data has gained momentum over the past few years, but the understanding and the application of its principles often remain problematic. This webinar offers a short critical introduction to Linked Data by positioning the approach within the global evolution of data modeling, allowing an understanding of the advantages but also of the limits of RDF. After this conceptual introduction, the fundamental importance of data quality in the context of Linked Data is underlined by applying data profiling techniques with the help of OpenRefine. Methods and tools for metadata reconciliation and enrichment, such as Named-Entity Recognition (NER), are illustrated with the help of the same software. The webinar refers to case studies with real-life data which participants can reuse to continue exploring OpenRefine at their own pace afterwards. The case studies have been developed in the context of the handbook "Linked Data for Libraries, Archives and Museums", to be published by Facet Publishing in June 2014.
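The reconciliation idea behind such tools can be sketched independently of OpenRefine itself: normalize messy field values, then match them against a controlled vocabulary (the data and normalization rules below are invented for illustration, not OpenRefine's actual algorithm):

```python
import re

def normalize(value):
    """Crude normalization of the kind applied during data profiling:
    lowercase, strip punctuation and collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", value.lower())).strip()

def reconcile(messy_values, vocabulary):
    """Match messy metadata values against a controlled vocabulary."""
    index = {normalize(term): term for term in vocabulary}
    return {v: index.get(normalize(v)) for v in messy_values}

vocab = ["Brussels", "Ghent"]
print(reconcile([" brussels!", "Gent"], vocab))
# {' brussels!': 'Brussels', 'Gent': None}
```

The unmatched value ("Gent") is exactly the kind of case where enrichment services such as NER, or human review, take over.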


Portrait: Seth van Hooland

Seth van Hooland
Seth van Hooland is an assistant professor at the Université libre de Bruxelles (ULB), where he leads the Master in Information Science. After a career in the private sector with a digitization company, he obtained his PhD in information science at ULB in 2009. He is currently teaching a special course on linked data at the Information School of the University of Washington. He is also active as a consultant for both public and private organizations.

Ruben Verborgh

Ruben Verborgh
Ruben Verborgh is a researcher in semantic hypermedia at Ghent University – iMinds, Belgium, where he obtained his PhD in computer science in 2014. He explores the connection between semantic web technologies and the web's architectural properties, with the ultimate goal of building more intelligent clients. Along the way, he has become fascinated by linked data, REST/hypermedia, web APIs and related technologies. He is the co-author of a book on OpenRefine and several publications on web-related topics in international journals.

Categories: Metadata Modeling | Transactions on Metadata | Resource Description Framework (RDF)


van Hooland and Verborgh webinar

Access: Presentation slides

2013 Webinars

4 December 2013

Thomas Hickey, Chief Scientist, OCLC

About the Webinar:

Libraries around the world have a long tradition of maintaining authority files to assure the consistent presentation and indexing of names. As library authority files have become available online, their data has become accessible -- and many files have been published as Linked Open Data (LOD) -- but names in one library authority file typically had no links to corresponding records for persons and organizations in other library authority files. After a successful experiment in matching the Library of Congress/NACO authority file with the German National Library's authority file, an online system called the Virtual International Authority File was developed to facilitate sharing by ingesting, matching, and displaying the relations between records in multiple authority files.

The Virtual International Authority File (VIAF) has grown from three source files in 2007 to more than two dozen files today. The system harvests authority records, enhances them with bibliographic information, and brings them together into clusters when it is confident the records describe the same identity. Although the most visible part of VIAF is an HTML interface, the API beneath it supports a linked data view of VIAF, with URIs representing the identities themselves, not just URIs for the clusters. It supports names for persons, corporations, geographic entities, works, and expressions. With English, French, German, and Spanish interfaces (and Japanese in progress), the system is used around the world, serving over a million queries per day.

The service harvests some 30 million authority records, enhances them with information from 100 million bibliographic records to produce a file of 20+ million clusters, each representing a person, organization, jurisdiction, work, or expression. In addition to supporting a Web browser HTML interface, the API to VIAF supports content negotiation for other views, such as RDF-XML and MARC-21. Bulk dumps of the VIAF clusters are available under the ODC-By attribution license.
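The content negotiation mentioned above can be sketched with Python's standard library: a client asks for an alternative view of a cluster URI by setting the HTTP Accept header. The VIAF identifier below is a placeholder, and this sketch only constructs the request rather than querying the live service.

```python
from urllib.request import Request

# Sketch of HTTP content negotiation against a VIAF cluster URI.
# The identifier in the URI is a placeholder, not a real VIAF record.
viaf_uri = "https://viaf.org/viaf/000000000/"

# Asking for RDF-XML instead of the default HTML view.
req = Request(viaf_uri, headers={"Accept": "application/rdf+xml"})

print(req.get_header("Accept"))
# -> application/rdf+xml
```

Sending this request with `urllib.request.urlopen(req)` would let the server choose the representation based on the Accept header; MARC-21 views work the same way with a different media type.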

The webinar will cover some of the challenges VIAF faces in dealing with many different formats and approaches to describing identities; the relationship of VIAF to its source authority files and to other identity systems such as ORCID and ISNI; VIAF's approach to sustainability, governance, and persistence; and how ambiguity is recognized and managed.

DCMI/NISO Webinar: Metadata for Public Sector Administration

30 October 2013

Makx Dekkers, Consultant
Stijn Goedertier, Manager at PwC Technology Consulting

About the Webinar:

One key challenge for e-Government programs around the world has been the lack of easily accessible information about the metadata schemas, controlled vocabularies, code lists, and other reference data that provide interoperability among a broad diversity of data sources.

The Asset Description Metadata Schema (ADMS) was developed for exchanging information about such "interoperability assets". The schema was developed with support from the European Commission with the objective of facilitating interoperability across eGovernment programmes in Europe, but it is already proving its usefulness in a wider context, for example to describe specifications maintained by DCMI and W3C. One key implementation of ADMS is in a federation of semantic asset repositories on the Joinup server.

Libraries that collect government information will benefit if such information is based on a set of commonly used schemas, vocabularies and code lists, making it easier to aggregate information from multiple sources. This webinar introduces the ADMS schema and discusses examples of its implementation.

DCMI/NISO Webinar: Implementing Linked Data in Developing Countries and Low-Resource Conditions

25 September 2013

Johannes Keizer, Food and Agriculture Organization of the United Nations
Caterina Caracciolo, Food and Agriculture Organization of the United Nations

About the Webinar:

Open data is a crucial prerequisite for inventing and disseminating the innovative practices needed for agricultural development. To be usable, data must not just be open in principle—i.e., covered by licenses that allow re-use. Data must also be published in a technical form that allows it to be integrated into a wide range of applications.

This webinar describes the technical solutions adopted by a widely diverse global network of agricultural research institutes for publishing research results. The talk focuses on AGRIS, a central and widely-used resource linking agricultural datasets for easy consumption, and AgriDrupal, an adaptation of the popular, open-source content management system Drupal optimized for producing and consuming linked datasets.

Agricultural research institutes in developing countries share many of the constraints faced by libraries and other documentation centers, and not just in developing countries: institutions are expected to expose their information on the Web in a re-usable form with shoestring budgets and with technical staff working in local languages and continually lured by higher-paying work in the private sector. Technical solutions must be easy to adopt and freely available. The webinar will be of interest to any institution seeking ways to publish and curate data in the Linked Data cloud.

DCMI/NISO Webinar: Semantic Mashups Across Large, Heterogeneous Institutions: Experiences from the VIVO Service

22 May 2013

John Fereira, Cornell University

About the Webinar:

VIVO is a semantic web application focused on discovering researchers and research publications in the life sciences. The service, which uses open-source software originally developed and implemented at Cornell University, operates by harvesting data about researcher interests, activities, and accomplishments from academic, administrative, professional, and funding sources. Using a built-in, editable ontology for describing things such as People, Courses, and Publications, data is transformed into a Semantic-Web-compliant form. VIVO provides automated and self-updating processes for improving data quality and authenticity. Starting with a classic Google-style search box, VIVO users can browse search results structured around people, research interests, courses, publications, and the like—data that can be exposed for re-use by other systems in a machine-readable format.

This webinar, presented by a veteran of the Albert R. Mann Library Information Technology Services department at Cornell, where the VIVO project was born, offers a software developer's perspective on the practicalities of building a high-quality Semantic Web search service over existing data maintained in dozens of formats and software platforms at large, diverse institutions. The talk will highlight services that leverage the Semantic Web platform in innovative ways, e.g., for finding researchers based on the text content of a particular Web page and for visualizing networks of collaboration across institutions.

Deployment of RDA (Resource Description and Access) Cataloging and its Expression as Linked Data

24 April 2013

Alan Danskin, The British Library

About the Webinar:

A seminar at the British Library in April 2012 marked the fifth anniversary of a 2007 meeting at which representatives of the Dublin Core, Semantic Web, and RDA communities jointly recommended that the then-draft cataloging standard RDA be provided in the form of vocabularies and application profiles usable for Linked Data.

One year after this anniversary meeting and one year closer to the general deployment of RDA in libraries, this webinar will take stock of progress towards developing application profiles based on RDA and discuss the practicalities of exposing RDA-based data in the Linked Data cloud.

DCMI/NISO Webinar: Translating the Library Catalog from MARC into Linked Data: An Update on the Bibliographic Framework Initiative

23 January 2013

Eric Miller, Zepheira

About the Webinar:

In May 2012, the Library of Congress announced a new modeling initiative focused on reflecting the MARC 21 library standard as a Linked Data model for the Web, with an initial model to be proposed by the consulting company Zepheira. The goal of the initiative is to translate the MARC 21 format to a Linked Data model while retaining the richness and benefits of existing data in the historical format.

In this webinar, Eric Miller of Zepheira will report on progress towards this important goal, starting with an analysis of the translation problem and concluding with potential migration scenarios for a broad-based transition from MARC to a new bibliographic framework.

2012 Webinars

24 October 2012

Brian Sletten, Bosatsu Consulting
Stéphane Corlosquet, Software Engineer and Drupal Developer at MIND Informatics
Thomas Baker, DCMI

About the Webinar:

As described in the April NISO/DCMI webinar by Dan Brickley, Schema.org is a search-engine initiative aimed at helping webmasters use structured data markup to improve the discovery and display of search results. Drupal 7 makes it easy to mark up HTML pages with Schema.org terms, allowing users to quickly build websites with structured data that can be understood by Google and displayed as Rich Snippets.

Improved search results are only part of the story, however. Data-bearing documents become machine-processable once you find them. The subject matter, important facts, calendar events, authorship, licensing, and whatever else you might like to share are there for the taking. Sales reports, RSS feeds, industry analyses, maps, diagrams, and process artifacts can now connect back to other data sets, providing linkage to context and related content. The key to this is the adoption of standards for both the data model (RDF) and the means of weaving it into documents (RDFa). Drupal 7 has become the leading content platform to adopt these standards.

This webinar will describe how RDFa and Drupal 7 can improve how organizations publish information and data on the Web for both internal and external consumption. It will discuss what is required to use these features and how they impact publication workflow. The talk will focus on high-level and accessible demonstrations of what is possible. Technical people should learn how to proceed while non-technical people will learn what is possible.
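As a minimal sketch of the pattern discussed above, the snippet below embeds schema.org terms in HTML as RDFa attributes (`vocab`, `typeof`, `property`) and then extracts them with Python's standard-library HTML parser. The markup and values are illustrative, not actual Drupal output.

```python
from html.parser import HTMLParser

# Illustrative HTML carrying schema.org terms as RDFa attributes,
# the kind of markup Drupal 7 can emit for Rich Snippets.
html = """
<div vocab="http://schema.org/" typeof="Book">
  <span property="name">Linked Data for Libraries, Archives and Museums</span>
  <span property="author">Seth van Hooland</span>
</div>
"""

class RDFaProperties(HTMLParser):
    """Collect the values of RDFa `property` attributes."""
    def __init__(self):
        super().__init__()
        self.properties = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "property":
                self.properties.append(value)

parser = RDFaProperties()
parser.feed(html)
print(parser.properties)
# -> ['name', 'author']
```

A search engine's extractor does essentially this, then resolves each property against the declared vocabulary to build a typed description of the page's content.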

Latest, revised version of presentation

DCMI/NISO Webinar: Metadata for Managing Scientific Research Data

22 August 2012

Jane Greenberg, Professor, University of North Carolina Chapel Hill
Thomas Baker, DCMI

About the Webinar:

The past few years have seen increased attention to national and international policies for data archiving and sharing. Chief motivators include the proliferation of digital data and a growing interest in research data and supplemental information as a part of the framework for scholarly communication. Key objectives include not only preservation of scientific research data, but making data accessible to verify research findings and support the reuse and repurposing of data.

Metadata figures prominently in these undertakings, and is critical for the success of any data repositories or archiving initiative, hence increased attention to metadata for scientific data -- specifically for metadata standards development and interoperability, data curation and metadata generation processes, data identifiers, name authority control (for scientists), Linked Data, ontology and vocabulary work, and data citation standards.

This NISO webinar will provide a historical perspective and an overview of current metadata practices for managing scientific data, with examples drawn from operational repositories and community-driven data science initiatives. It will discuss challenges and potential solutions for metadata generation, identifiers, name authority control, Linked Data, and data citation.

DCMI/NISO Webinar: Schema.org and Linked Data: Complementary Approaches to Publishing Data

25 April 2012

Dan Brickley, Consultant
Thomas Baker, DCMI

About the Webinar:

Schema.org -- a collaboration of the Google, Yahoo!, and Bing search engines -- provides a way to include structured data in Web pages. Since its introduction in June 2011, the vocabulary has grown to cover descriptive terms for content such as movies, music, organizations, TV shows, products, locations, news items, and job listings. The goal of Schema.org is "to improve the display of search results, making it easier for people to find the right web pages." The initiative has emerged as a focal point for publishers of structured data in Web pages, especially but not exclusively in the commercial sector.

This webinar will explore how the publication methods of relate to the methods used to publish Linked Data. Must data providers commit to one or the other, or can the two approaches exist side-by-side, even reinforcing each other?

DCMI/NISO Webinar: Taking Library Data from Here to There

22 February 2012

Taking Library Data from Here to There (PDF, 10 MB)

Karen Coyle, Consultant
Thomas Baker, DCMI

About the Webinar:

Libraries have been creating metadata for resources for well over a century. The good news is that library metadata is rules-based and that the library cataloging community has built up a wealth of knowledge about publications, their qualities, and the users who seek them. The bad news is that library practices were fixed long before computers were used to store and retrieve the data. Library cataloging practice retains elements of the era of printed catalogs and alphabetized cards, and needs to modernize to take advantage of new information technologies. This metadata, however, exists today in tens of thousands of databases, and a large sigh is heard around the world whenever a librarian considers the need to make this massive change.

As with all large problems, this one becomes more tractable when broken into smaller pieces. Karen Coyle will present her "five stars of library data," an analysis of the changes needed and some steps that libraries can begin to take immediately. She will also discuss the "open world" view of the linked data movement and how this view can increase the visibility of libraries in the global information space. This webinar will give an introduction to the types of changes that are needed as well as the value that can be realized in library services. Attendees will learn of some preparatory steps that have already been taken, which should confirm that libraries have indeed begun the journey "From Here to There."

2011 Webinars

DCMI/NISO Webinar: The RDA Vocabularies: Implementation, Extension, Mapping

This Webinar was held on 16 November 2011 in co-operation with NISO.

The RDA Vocabularies: Implementation, Extension, Mapping (PDF, 7.9 MB)
Thomas Baker, DCMI
Diane Hillmann, DCMI

About the Webinar

During a meeting at the British Library in May 2007 between the Joint Steering Committee for the Development of RDA and DCMI, important recommendations were forged for the development of an element vocabulary, application profile, and value vocabularies [1], based on the Resource Description and Access (RDA) standard, then in final draft. A DCMI/RDA Task Group [2] has completed much of the work, and described their process and decisions in a recent issue of D-Lib Magazine [3]. A final, pre-publication technical review of this work is underway, prior to adoption by early implementers.

This webinar provides an up-to-the-minute update on the review process, as well as progress on the RDA-based application profiles. The webinar discusses practical implementation issues raised by early implementers and summarizes issues surfaced in virtual and face-to-face venues where the vocabularies and application profiles have been discussed.



DCMI/NISO Webinar: International Bibliographic Standards, Linked Data, and the Impact on Library Cataloging

This Webinar was held on 24 August 2011 in co-operation with NISO.

International Bibliographic Standards, Linked Data, and the Impact on Library Cataloging (PDF, 1.4 MB)
Thomas Baker, DCMI
Gordon Dunsire, Consultant

About the Webinar

The International Federation of Library Associations and Institutions (IFLA) is responsible for the development and maintenance of International Standard Bibliographic Description (ISBD), UNIMARC, and the "Functional Requirements" family for bibliographic records (FRBR), authority data (FRAD), and subject authority data (FRSAD). ISBD underpins the MARC family of formats used by libraries world-wide for many millions of catalog records, while FRBR is a relatively new model optimized for users and the digital environment. These metadata models, schemas, and content rules are now being expressed in the Resource Description Framework language for use in the Semantic Web.

This webinar provides a general update on the work being undertaken. It describes the development of an Application Profile for ISBD to specify the sequence, repeatability, and mandatory status of its elements. It discusses issues involved in deriving linked data from legacy catalogue records based on monolithic and multi-part schemas following ISBD and FRBR, such as the duplication which arises from copy cataloging and FRBRization. The webinar provides practical examples of deriving high-quality linked data from the vast numbers of records created by libraries, and demonstrates how a shift of focus from records to linked-data triples can provide more efficient and effective user-centered resource discovery services.

DCMI/NISO Webinar: Metadata Harmonization: Making Standards Work Together

This Webinar was held on 16 March 2011 in co-operation with NISO.

Metadata Harmonization: Making Standards Work Together (PDF, 1.4 MB)
Thomas Baker, DCMI
Mikael Nilsson, Royal Institute of Technology, Sweden

About the Webinar

Metadata plays an increasingly central role as a tool enabling the large-scale, distributed management of resources. However, metadata communities which have traditionally worked in relative isolation have struggled to make their specifications interoperate with others in the shared web environment.

This webinar explores how metadata standards with significantly different characteristics can productively coexist and how previously isolated metadata communities can work towards harmonization. The webinar presents a solution-oriented analysis of current issues in metadata harmonization with a focus on specifications of importance to the learning technology and library environments, notably Dublin Core, IEEE Learning Object Metadata, and W3C's Resource Description Framework. Providing concrete illustrations of harmonization problems and a roadmap for designing metadata for maximum interoperability, this webinar will provide a bird's-eye perspective on the respective roles of metadata syntaxes, formats, semantics, abstract models, vocabularies, and application profiles in achieving metadata harmonization.

2010 Webinars

DCMI/NISO Webinar: Dublin Core: The Road from Metadata Formats to Linked Data

This Webinar was held on 25 August 2010 in co-operation with NISO.

Dublin Core in the Early Web Revolution
Makx Dekkers

What Makes the Linked Data Approach Different
Thomas Baker

Designing Interoperable Metadata on Linked Data Principles
Thomas Baker

Bridging the Gap to the Linked Data Cloud
Makx Dekkers

About the Webinar

Created in 1995, the Dublin Core was a result of the early phase of the web revolution. While most saw the Dublin Core as a simple metadata format, or as a set of descriptive headers embedded in web pages, a few of its founders saw it as a cornerstone of a fundamentally new approach to metadata. In the shadow of search engines, a Semantic Web approach developed in the early 2000s, reaching maturity in 2006 with the Linked Data movement, which uses Dublin Core as one of its key vocabularies. This webinar will discuss the difference between traditional approaches based on record formats and the Linked Data approach, based on metadata "statements" designed to be merged across data silo boundaries. Focusing on the dual role of Dublin Core as a format and as a Semantic Web vocabulary, it examines new technologies for bridging the gap between traditional and Linked Data approaches, highlighting how old ideas such as embedded metadata have been reinvented with new web technologies and tools to solve practical problems of resource discovery and navigation.
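The contrast drawn above between record formats and mergeable "statements" can be sketched as follows: when Dublin Core descriptions are modeled as subject-predicate-object triples, two catalogs can be combined by simple set union, with identical statements deduplicating automatically. The example URIs, titles, and names below are invented placeholders.

```python
# Toy sketch of statement-based merging across data silos.
# Each statement is a (subject, predicate, object) triple using
# invented example.org URIs and Dublin Core Terms predicates.
DCT = "http://purl.org/dc/terms/"

catalog_a = {
    ("http://example.org/doc/1", DCT + "title", "Metadata Basics"),
    ("http://example.org/doc/1", DCT + "creator", "Jane Doe"),
}
catalog_b = {
    # The same title statement, independently recorded elsewhere...
    ("http://example.org/doc/1", DCT + "title", "Metadata Basics"),
    # ...plus a statement the first catalog lacks.
    ("http://example.org/doc/1", DCT + "language", "en"),
}

# Set union merges the catalogs; the duplicated statement collapses.
merged = catalog_a | catalog_b
print(len(merged))
# -> 3
```

A record-format view would have to reconcile two competing records for the document; the statement view simply accumulates facts about the same subject URI.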

DCMI Regional Tutorials

2009 Tutorials

Dublin Core: building blocks for interoperability

These tutorials were sponsored by the Fondazione Rinascimento Digitale and presented in Florence, Italy, 17 December 2009.

History, objectives and approaches of the Dublin Core Metadata Initiative
Makx Dekkers

DCMI and the metadata landscape
Makx Dekkers

Basics of Dublin Core Metadata
Thomas Baker

Data Integration and Structured Search
Thomas Baker

The "metadata record" and DCMI Abstract Model
Thomas Baker

Web-enabled vocabularies
Thomas Baker

Linking legacy data
Thomas Baker

Outcomes of DC-2009
Makx Dekkers

DCMI International Conference Tutorials


The Hague, Netherlands, 21 September 2011.

An Introduction to Dublin Core (PDF, 304KB)
Stephanie Taylor

From Dublin Core to Linked Data (PDF, 2.9 MB)
Paul Hermans

SKOS (Simple Knowledge Organization System) (PDF, 2.9 MB)
Antoine Isaac


Pittsburgh, PA, USA, 20 October 2010.

Basic Tutorials

Dublin Core: History, Key Concepts, and Evolving Context (part one) (PDF, 1.8 MB)
Jane Greenberg, Professor, Director of the SILS Metadata Research Center

Dublin Core: DCAM, Syntax, and Semantics (part two) (PDF, 1.3 MB)
Jon Phipps, Lead Scientist Internet Strategies JES & Co.

Transitional Tutorials

Semantic Web & Linked Data (PDF, 16 MB)
Karen Coyle

Six Step SAFARI from the Dublin Core to the Semantic Web (PDF, 2.6 MB)
Ron Daniel, Jr., Elsevier Labs


Seoul, Korea, 12 October 2009.

Basics of Dublin Core Metadata
Thomas Baker

Metadata Standards outside of DCMI
Marcia Zeng

Metadata Interoperability
Marcia Zeng


Berlin, Germany, 22 September 2008.

Tutorial 1: Dublin Core History and Basics
Jane Greenberg

Tutorial 2: Dublin Core - Key Concepts
Pete Johnston

Tutorial 3: Dublin Core and other schemas
Mikael Nilsson

Tutorial 4: Dublin Core in Practice
Marcia Zeng


Singapore, 27 August 2007.

Tutorial 1: Basic Semantics
Stuart Sutton

Tutorial 2: DCMI Basic Syntaxes
Mikael Nilsson

Tutorial 3: Vocabularies
Alistair Miles

Tutorial 4: Application Profiles
Diane Hillmann


Manzanillo, Mexico, 3-6 October 2006.

Tutorial 1: Basic Semantics
Marty Kurth

Tutorial 2: Basic Syntax
Andy Powell

Tutorial 3: Vocabularies
Joe Tennis

Tutorial 4: Application Profiles
Diane Hillmann


Madrid, Spain, 12-15 September 2005.

Tutorial 1: Basic Syntax
Andy Powell

Tutorial 2: Basic Semantics
Diane I. Hillmann

Tutorial 3: Vocabularies
Ron Daniel

Tutorial 4: SKOS-Core
Alistair Miles

Tutorial 5: Metadata Application Profiles
English (Part I)
English (Part II)
Rachel Heery and Robina Clayphan


Shanghai, China, 11-14 October 2004. The Shanghai Library translated the tutorials into Chinese.

An Introduction to Dublin Core
English | Chinese
Diane I. Hillmann, National Science Digital Library

Encoding DC in (X)HTML, XML and RDF
English | Chinese
Andy Powell, UKOLN

Creating an Application Profile
English | Chinese
Thomas Baker, Fraunhofer Society
Robina Clayphan, British Library
Pete Johnston, UKOLN

DC-Library Application Profile
English | Chinese
Robina Clayphan, Co-ordinator of Bibliographic Standards, The British Library

The Dublin Core Collection Description Application Profile (DC CD AP)
English | Chinese
Pete Johnston, UKOLN

Creating and Managing Controlled Vocabularies for Use in Metadata
English | Chinese
Stuart A. Sutton & Joseph T. Tennis, Information School of the University of Washington, Seattle

DCMI Community-Submitted Tutorials

Please note that the listing of the resources in this section does not imply endorsement of any kind by DCMI. The responsibility for the content of these resources lies entirely with their authors.

Institutional Web Management Workshop 2002: The Pervasive Web
United Kingdom

Introducción a los metadatos: estándares y aplicación
Eva Méndez, University Carlos III of Madrid
José Senso, University of Granada

Materials for a Metadata Seminar (1998)
Brian Kelly and Andy Powell
United Kingdom

Metadata Implementation Guide for Web Resources
3rd edition - July 2004
Ad Hoc Committee of Federal Metadata Experts, Metadata Action Team, Council of Federal Libraries
Government of Canada

The Metadata Landscape: State of Minnesota Viewpoint.
(PDF)
Eileen Quam,
Minnesota Office of Technology,
Minnesota Department of Natural Resources
Minnesota, USA

Métadonnées: une initiation - Dublin Core, IPTC, EXIF, RDF, XMP, etc.
Patrick Peccatte

Slides of metadata courses for government librarians in the UK
(PDF)
Maewyn Cumming
Senior Policy Advisor: Interoperability and Metadata
Office of the e-Envoy e-Government
United Kingdom

Why and How to use the Dublin Core Metadata for Health Resources on the Internet: an Introduction
8th European Conference of Medical and Health Libraries - Cologne, Germany
September 16-21, 2002
I. Robu and B. Thirion