Saturday, October 20, 2018

Access to HMT Facsimiles

The Homer Multitext produces complex data. The complexity is irreducible, since our mission is to publish digital editions mapped to their manuscript folios, with Iliadic texts linked to their accompanying commentaries.

This complex data is published as a single CEX file, a plain-text serialization of the current state of the HMT. That data is also exposed via a web service and an integrated web application. For more straightforward access, we have published a facsimile view of the data.

This post announces the Homer Multitext Facsimile Index application, which allows users to access HMT data by Iliadic citation, e.g. 2.100 (an individual passage) or 2.1-2.10 (a range of passages).

Because traditional citations assumed an audience of (clever, intuitive) human readers, some traditional practices do not translate to a computational environment. For example, "1.1-10" is not a valid (that is, unambiguous) citation. Does it mean "from 1.1 to 1.10" or "from Book 1, Line 1, through all of Book 10"? The unimaginative machine will assume the latter. So with this app, and with CITE data generally, users must be verbose and specific: 1.1-1.10, with [book].[line] on both sides of the hyphen.
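
To make the rule concrete, here is a minimal sketch in Scala (not the application's own code) of the check such an app has to perform: a citation, and each end of a hyphenated range, counts as unambiguous only if it is a complete [book].[line] reference.

```scala
// A minimal sketch of the citation rule described above: every citation,
// and both ends of every range, must be a fully qualified [book].[line].
object CitationCheck {
  private val bookLine = """\d+\.\d+"""   // [book].[line]

  /** True only if the citation is unambiguous: "2.100" or "1.1-1.10". */
  def isUnambiguous(citation: String): Boolean =
    citation.split("-").toList match {
      case single :: Nil       => single.matches(bookLine)
      case begin :: end :: Nil => begin.matches(bookLine) && end.matches(bookLine)
      case _                   => false
    }

  def main(args: Array[String]): Unit = {
    println(isUnambiguous("1.1-1.10")) // true: [book].[line] on both sides of the hyphen
    println(isUnambiguous("1.1-10"))   // false: "10" could be line 10 or all of Book 10
  }
}
```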


As with all expressions of HMT data, this application was built with the CITE Architecture code libraries in Scala and Scala.js.

Thursday, July 19, 2018

The Homer Multitext Microservice

The Homer Multitext produces integrated data on Greek Epic poetry, its language, its evolution over time, the traditions of scholarship surrounding it, and the physical artifacts, manuscripts and papyri, that are our only evidence. For a concise explanation of what the HMT publishes, please see https://github.com/homermultitext/hmt-archive/blob/master/overview.md.

At the same time, we, the Project Architects of the HMT, Neel Smith and Christopher Blackwell, are interested in making this data as widely accessible as possible. The data is released in the CEX format, a plain-text serialization of data organized according to defined abstract data models. We have developed code libraries in Scala implementing these abstract data models. These libraries provide the greatest flexibility in manipulating, locating, aggregating, and transforming the data of the Homer Multitext.
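
As a hedged illustration of what those libraries make possible, the sketch below loads a downloaded CEX release and retrieves one Iliad passage by URN. The file name is illustrative, and the exact constructor and method names (CiteLibrary(...), textRepository, the ~~ retrieval operator) are assumptions that should be checked against the current library documentation.

```scala
// A minimal sketch of loading an HMT CEX release with the CITE Architecture
// Scala libraries. Signatures shown here are assumptions based on the
// edu.holycross.shot libraries, not quoted from project code.
import scala.io.Source
import edu.holycross.shot.cite.CtsUrn
import edu.holycross.shot.scm.CiteLibrary

object LoadHmt {
  def main(args: Array[String]): Unit = {
    // Read a downloaded CEX release into a single String (file name illustrative).
    val cex = Source.fromFile("hmt-2018g.cex").getLines.mkString("\n")

    // Build an integrated library object from the CEX serialization.
    val library = CiteLibrary(cex)

    // Retrieve one Iliad passage from the text repository, if texts are present.
    val urn = CtsUrn("urn:cts:greekLit:tlg0012.tlg001.msA:1.1-1.10")
    library.textRepository.foreach { tr =>
      val passage = tr.corpus ~~ urn
      passage.nodes.foreach(n => println(s"${n.urn}  ${n.text}"))
    }
  }
}
```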

For users who may not want to write code directly, we have provided an online application offering a graphical user interface for interacting with HMT data using the CITE Architecture’s Scala libraries.

For those who might want to write their own applications that interact with the HMT data, we provide a collection of microservices.

The examples below demonstrate the Scala Cite Services (Akka) application, SCS-Akka, running at beta.hpcc.uh.edu/scs/, and (as of July 19, 2018) serving data from the 2018g Release of the Homer Multitext Data.

The service accepts requests via HTTP, and returns JSON expressions of CITE objects. We have published a library in Scala for de-marshalling those JSON expressions into CITE data objects.
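
A bare-bones sketch of that request/response cycle appears below. The URL pattern ("texts/" followed by a CTS URN) is an assumption for illustration; in practice the JSON reply would be handed to the published de-marshalling library rather than inspected by hand.

```scala
// A sketch of one HTTP request against the microservice. The endpoint path
// is an illustrative assumption; the service's actual routes are documented
// in the scs-akka repository.
import scala.io.Source

object FetchPassage {
  def main(args: Array[String]): Unit = {
    val base = "http://beta.hpcc.uh.edu/scs/"
    val urn  = "urn:cts:greekLit:tlg0012.tlg001.msA:1.1-1.10"

    // HTTP GET; the service answers with a JSON expression of a text corpus.
    val json = Source.fromURL(base + "texts/" + urn).mkString
    println(json.take(500)) // inspect the raw reply before de-marshalling
  }
}
```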

The CiteApp web-based application for the Homer Multitext gets its data from this service, and indeed the web-application and the microservice were developed jointly.

This collection of microservices is serving current data from the Homer Multitext, edited by Casey Dué and Mary Ebbott, a project of the Center for Hellenic Studies of Harvard University.

For more information on this service, please see https://github.com/cite-architecture/scs-akka.

For information on the CITE Architecture, please see https://cite-architecture.github.io.

Report bugs by filing issues on GitHub.

Texts

About the Service’s Catalog

See the Text Catalog

Get the First Valid Reference in a text

Get Valid References

All references for a version of a text:

Valid references for parts of a text:

Get Passages

Passages for a specific version of a text:

Passages for all versions of a text:

NGrams

NGrams in works present in the library:

Find citations to NGrams:

Returning a Corpus of Passages containing an NGram:

String Searches

Token Searches

Collections of Objects

Catalog

Objects

Get objects from multiple collections:

Finding Objects

urn-match

regexmatch

stringcontains

valueequals

numeric less-than

numeric less-than-or-equal

numeric equals

numeric greater-than

numeric greater-than-or-equal

numeric within

Data Models

Images

Basic Image Retrieval

Defining a width

Defining MaxWidth and MaxHeight

Embedding

  • 12-recto
  • 12-recto-detail

Relations

CITE Relations are associations of URN to URN, with the relationship specified by a Cite2 URN.
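
As a rough illustration, a single relation can be pictured as a triple of URNs, as in the sketch below; the specific URNs are for exposition only, not quoted from the HMT collections.

```scala
// A rough illustration of one CITE relation: a subject URN and an object URN,
// with the relationship itself named by a Cite2 URN. URNs are illustrative.
import edu.holycross.shot.cite.{Cite2Urn, CtsUrn}

object RelationExample {
  // "This scholion comments on Iliad 1.1 in the Venetus A."
  val subject  = CtsUrn("urn:cts:greekLit:tlg5026.msA.hmt:1.2")     // a comment (scholion)
  val relation = Cite2Urn("urn:cite2:cite:verbs.v1:commentsOn")     // the relationship, named by a Cite2 URN
  val obj      = CtsUrn("urn:cts:greekLit:tlg0012.tlg001.msA:1.1")  // the Iliadic passage
}
```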

Commentary Data Model

If a library includes CiteRelations and implements the Commentary data model, comments associated with passages of text can optionally be attached to replies to requests for a corpus of texts.

Documented Scholarly Editions (DSE) Data Model

The DSE data model consists of a CITE Collection of objects, each documenting a three-way relationship between (a) a text-bearing artifact, (b) a documentary image (ideally with a region of interest defined), and (c) a citable passage of text.

(The dse=true parameter is valid for all object-searching, as well as for retrieval of individual objects or ranges of objects.)
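
Schematically, one DSE record can be pictured as a triple of URNs, as in the sketch below; the case class and the specific URNs are illustrative, not quoted from the HMT collections.

```scala
// A minimal sketch of one DSE record: three URNs tying together a surface of
// a text-bearing artifact, a region of interest on a documentary image, and
// a citable passage of text. All URNs below are illustrative examples.
import edu.holycross.shot.cite.{Cite2Urn, CtsUrn}

case class DseRecord(
  passage:  CtsUrn,   // (c) the citable passage of text
  imageRoi: Cite2Urn, // (b) the documentary image, with a region of interest after "@"
  surface:  Cite2Urn  // (a) the surface of the text-bearing artifact
)

object DseExample {
  val iliad_1_1 = DseRecord(
    CtsUrn("urn:cts:greekLit:tlg0012.tlg001.msA:1.1"),
    Cite2Urn("urn:cite2:hmt:vaimg.2017a:VA012RN_0013@0.2,0.22,0.6,0.03"),
    Cite2Urn("urn:cite2:hmt:msA.v1:12r")
  )
}
```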

Wednesday, May 30, 2018

Homer Multitext 2018d Data Release

We are pleased to announce the 2018d release of Homer Multitext data. This is the fourth release of 2018. With each release, we try to improve our automated validation and machine-assisted verification, and to improve integration of this data through refinements to the data models.

This is the work of over 170 editors.

A guide to understanding Homer Multitext data is online.

All current data is on the project’s GitHub site. The current release, 2018d, is in the releases-cex subdirectory.

The work of the Homer Multitext is focused on scholarly data. At the same time, we are interested in providing useful access to this data in as many ways as possible. With the 2018d release, we are also pleased to provide these new tools:

Thursday, January 4, 2018

Publishing the Homer Multitext project archive

The Homer Multitext project (HMT) is changing its publication practice in 2018.  All of our work in progress remains available from publicly visible repositories hosted on GitHub, but we are adopting a new format for integrating material from our working archive into publishable units.

Our goals have always been first to specify a model for all HMT data structures independent of any publication format, and then to select a format that fully captures the semantics of the model.  In choosing a format for publication, we prefer one that, while completely expressing the model, is as simple as possible.  It should be intelligible both to human readers and to software, and readily usable by the widest possible range of digital tools.

Beginning in 2014, we adopted the TTL serialization format of the Resource Description Framework (RDF) to integrate textual editions, data about physical artifacts like manuscripts, and documentary images into a single publishable file.  RDF was designed to facilitate dynamic exchange and automated linking of resources on the world wide web, and is widely used for that purpose in the digital humanities community today.  As a format for disseminating stable releases of HMT content, it is not ideal, however. RDF can be quite verbose: to represent a single citable node of text in one of our editions, for example, requires more than a half dozen separate RDF statements.  It is often not immediately intelligible to human readers, and although the RDF model can be implemented in multiple formats (JSON and XML, in addition to TTL), RDF data can only be practically used with software specifically aimed at RDF processing.

This month, we are releasing our first published data sets in the CITE Exchange format (CEX).  To quote the CEX specification, CEX is "a plain-text, line-oriented data format for serializing citable content following the models of the CITE Architecture."  CEX makes it possible to represent any of the fundamental models of the HMT archive — texts, citable collections of objects, and the complex relations among these objects that our archival data sets encode — as simple tabular structures in labelled blocks of a plain text file you can inspect with any text editor.  All blocks in a CEX file are optional, so we can equally easily publish a single updated body of material — a new set of photographs of a manuscript, or a newly edited section of a text — or an entire compilation of our current archive in a single plain-text file.  Because each CEX block is a table represented as lines of delimited text, generic tools from spreadsheets, databases, or ancient command-line utilities like `sed` and `grep` can be directly applied to CEX data, in addition to specialized code libraries we have developed that understand the semantics of citation with URNs.  (See https://cite-architecture.github.io/ for more information about the cross-platform code libraries.)
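
For a concrete (if schematic) sense of the format, the sketch below embeds a two-line #!ctsdata block in a Scala string and pulls the citable nodes back out with nothing more than generic line and string operations; the sample content is illustrative.

```scala
// A schematic sketch of a CEX file: labelled blocks beginning with "#!",
// each a table of "#"-delimited lines. The #!ctsdata sample is illustrative.
object CexSketch {
  val cex: String =
    """#!cexversion
      |3.0
      |
      |#!ctsdata
      |urn:cts:greekLit:tlg0012.tlg001.msA:1.1#Μῆνιν ἄειδε θεὰ Πηληϊάδεω Ἀχιλῆος
      |urn:cts:greekLit:tlg0012.tlg001.msA:1.2#οὐλομένην· ἡ μυρί' Ἀχαιοῖς ἄλγε' ἔθηκεν·
      |""".stripMargin

  def main(args: Array[String]): Unit = {
    // Generic line-oriented processing, no RDF tooling required:
    // pull every citable node out of the #!ctsdata block.
    val lines = cex.linesIterator.toVector
    val ctsData = lines
      .dropWhile(_ != "#!ctsdata").drop(1)    // start of the block
      .takeWhile(l => !l.startsWith("#!"))    // until the next block label
      .filter(_.contains("#"))
    ctsData.foreach { line =>
      val Array(urn, text) = line.split("#", 2)
      println(s"$urn -> $text")
    }
  }
}
```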

As a result, over the coming weeks you will see a series of short announcements of releases as we test and release one portion of our archive at a time.

Happy New Year, with complex data in simple formats!