FactGrid talk:PhiloBiblon


Further Project Pages

1st Web-Meeting, 18 May 2021

Here are the notes from the meeting 17 May 2021:

Philobiblon Meeting Notes

Dear Colleagues,

Here's what we have in the Workplan for this summer:

Objectives:
  • Review of Wikibase software: standards, protocols, data formats, and implementations; comparison with PhiloBiblon schema and data dictionaries and current and desired functionalities.
  • Scenarios to describe how contributors and users will interact with PhiloBiblon.
  • Functional specifications for the features required for those scenarios.
  • Identify a small set of particularly rich and related records (number TBD) from each of PhiloBiblon's four databases (BETA, BIPA, BITAGAP, BITECA) and ten tables to serve as test cases.
  • Ongoing data clean-up in legacy PhiloBiblon.

Tasks:
  • T1. Review Wikibase software: standards, protocols, data formats, implementations (PI, Anderson, Formentí, Simons)
  • T2. Develop user scenarios based on PhiloBiblon data and current and desired functionalities (PI, academic staff, Adv. Board: Dagenais, Gullo)
  • T3. Create functional specification for features needed to re-create these functionalities (PI, Anderson, Formentí)
  • T4. Identify set of test target records (PI, academic staff)
  • T5. Ongoing clean-up of legacy data (PI, academic staff)

5/18/21 Meeting Notes

Attendance:
  • Adam Anderson (Zoom host), data analyst for the project, Berkeley lecturer
  • Charles Faulhaber, project PI and (new) director of the Bancroft Library
  • Josep Formentí, software engineer in Barcelona; also knows NLP and web development
  • Olaf Simons, book historian at the University of Erfurt, Gotha; Wikimedia platform & FactGrid
  • Randal Brandt, head of cataloging at UC Berkeley, rare books
  • Xavier Agenjo, director of projects for the Fundación Ignacio Larramendi in Madrid, creator of more than 40 digital libraries
  • Daniel Gullo (dgullo@csbsju.edu), director of collections, cataloging, creating databases of libraries, modern digital collections, controlled vocabulary for underrepresented religious traditions in the Middle Ages (vHMML online database, NEH project director)
  • Óscar Perea Rodríguez, lecturer at USF, working with Charles on this since 2002
  • Jason Kovari (cataloging rare books), Cornell
  • Robert Sanderson, director of digital collections at Yale
  • Cliff Lynch, director of the Coalition for Networked Information, a small nonprofit in DC, and in the School of Information at UC Berkeley; worked on the predecessor of the CDL
  • John May (phone): software developer, information management systems, designer of PhiloBiblon (software)
  • John Dagenais: Professor of Spanish at UCLA, degree in library science, user of PhiloBiblon

PhiloBiblon Project: Goes back to 1975, as a spin-off of the Dictionary of the Old Spanish Language project at U of Wisconsin-Madison, a Spanish version of the OED, based on contemporary uses of the language. To do this they created an in-house database (1975 onward): Bibliography of Old Spanish Texts (BOOST), lineal ancestor of PhiloBiblon. Constant technology change. The collection was put on CD-ROM discs by 1992, with digital images + text. 1994: the internet arrives (Netscape). Charles was teaching a course on DH computing (including gopher, OCR, etc.). One of his students introduced him to the World Wide Web and he said that it wasn't going to be important… Since then they've been working on keeping up with the different versions (1.0, 2.0, 3.0 = Linked Open Data with RDF). Currently PhiloBiblon exports data from Windows PhiloBiblon into XML files to upload to the server at Berkeley, where XTF (eXtensible Text Framework), run by the CDL, takes large XML files from 9 PhiloBiblon tables (uniform title, manuscripts/editions, persons, etc.) and parses each of these files into individual records for querying. Objective: to get us aligned with Wikibase / the Wikimedia Foundation, to piggyback on their data and technology moving forward.

Olaf Simons: FactGrid (works on Masonic institutions). Charles found Olaf through commentary on their blog post. Wikibase is the software behind the Wikidata project; it has been under development since 2012. Used by national libraries worldwide to control data: creating triple statements (two entities linked by a relation), annotating them with further statements, and further developing metadata. You can see who is editing it by the minute. Each entity has a Q-number; e.g., for each person (gender, address, field of research, etc.) you can add as many statements as you want, and you can qualify these statements. You can add references to the entry (as many as you want). Used as a source for anything imaginable: we collect statements that become entries. Using SPARQL you can query this: e.g., take the Illuminati members list, query date of birth, and it appears as a column for each member. SPARQL no longer connects text input fields; it normally connects items to items. E.g., a person is a member of a lodge, which is another item which contains information with statements. Each item is a database object. We will have to translate input-field information into objects, and for that to work, each object needs a statement. First point: We don't deal with books; instead you have people, places and institutions connected to these books. All of these need statements to work. Produce a network of items and relations. Consider the types of objects (e.g. geographic names, proper nouns, etc.). We'll need to get an idea of the number of items you'll be creating beforehand. You should create your own team and get accounts for those members, along with managers of the accounts to run the team independently. Charles: For Spanish texts: 6K texts, 8K individuals. PhiloBiblon "data clips" correspond to P properties in Wikibase.
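For orientation, here is a minimal sketch of what such a SPARQL query could look like when run from Python. The endpoint URL, the URI prefixes, and the class Q-number are assumptions made for illustration; P2 ("what is it?") and P77 (date of birth) are the FactGrid properties mentioned elsewhere on this page.

import requests

# Assumed SPARQL endpoint and RDF prefixes for FactGrid (a Wikibase instance).
ENDPOINT = "https://database.factgrid.de/sparql"
QUERY = """
PREFIX fg:   <https://database.factgrid.de/entity/>
PREFIX fgt:  <https://database.factgrid.de/prop/direct/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?item ?label ?birth WHERE {
  ?item fgt:P2 fg:Q7 .                   # P2 = "what is it?"; Q7 is a placeholder class
  OPTIONAL { ?item fgt:P77 ?birth . }    # P77 = date of birth
  OPTIONAL { ?item rdfs:label ?label . FILTER(LANG(?label) = "en") }
}
LIMIT 20
"""

r = requests.get(ENDPOINT, params={"query": QUERY, "format": "json"},
                 headers={"User-Agent": "PhiloBiblon-sparql-sketch/0.1"})
for row in r.json()["results"]["bindings"]:
    print(row.get("label", {}).get("value", "?"),
          row.get("birth", {}).get("value", ""))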

Creating properties is not the issue. You create properties as needed and link them on a text-by-text basis.

The real problem is to understand the types of objects you have to create, and how they’re interconnected. They need to be created before you can link them.

E.g., often we start with places and move from there. First create the objects, then interconnect them.

Usually projects come to us with a spreadsheet…

Foreseeable problems: The entire project is multi-lingual (using P-numbers and Q-numbers (Deutsche Nationalbibliothek GND)), which allows for different labels on these items and properties. You should use it in the language of your sources. The database currently accommodates French, English and German (the main users of the database); they put their labels in the database. You will need to add labels in Spanish and Portuguese. Programming issues: the software is not designed to create presentable projects at the moment. This is ongoing work: 1) the FactGrid viewer, which creates pages based on information from the database, created on the fly. You will want something similar for your own data. Bruno Belhoste is the creator; you can contact him. This is where Wikibase is currently underdeveloped. A 'Knowledge' tool is in development which can also show the metadata for each entity. Otherwise you get the SPARQL query functionality.

Robert Sanderson to Everyone (12:32 PM): We used XTF at Getty for our archives: https://xtf.cdlib.org/ For what it's worth, at Yale we're doing this across the libraries, archives and museums, using the same process. The current data is 45 million such entities, spanning 2.5 billion triples. The difficulty is understanding the modeling and doing the transformation; the database side is essentially free. We have 15M MARC records, which get turned into 45M entities: people, places, concepts, objects, images, digital things (8 classes). We're not using Wikibase, but rather the main library standards by themselves. The difference between the data modeling paradigms will necessitate the creation of new identifiers. But Wikibase is really good at managing external, typed identifiers :) We should write down the things you want to refer to, e.g. 'ownership' (provenance in the museum and library world), if that's something you want to refer to independently of the object. From there you can work out the CSV tables and get them into the system. Not to be a broken record, but the modeling also determines the possibilities for the queries. If you have (for example) place-written and date-written, on a text that can have multiple authors writing at different times and at different places, then you need some way to connect author / place and time *in the model*.

Daniel Gullo: Biblissima is using this model: https://data.biblissima.fr/w/Accueil. Question about Q-numbers… Warning: scholars use the MS IDs (manid), text IDs (texid), and other IDs established in the PhiloBiblon system.

Olaf: You will create external IDs (vital for places, e.g.) for any item in PhiloBiblon, and they will be linked to the IDs from PhiloBiblon. The Q-numbers are merged based on double entries (there’s movement, they are not deleted, but merged). There are ways of creating more key-value pairs without deleting IDs.

This is similar to the database we’re developing… When you’re migrating your data, you want to think of it hierarchically with a controlled vocabulary: 1) Continents, 2) Geographic names, 3) institutions, 4) other entities.

Jason Kovari: Is the plan to assess what users really want, or move forward with what you have? Is part of the workplan assessing the work that you initially have? Jason Kovari (he/him) to Everyone (12:50 PM): The word I was forgetting earlier was Affinity. There is a Wikidata Affinity Group as part of the LD4 Community: https://www.wikidata.org/wiki/Wikidata:WikiProject_LD4_Wikidata_Affinity_Group

Charles: There's an orthogonal relationship between the 'library world' and what we're doing. PhiloBiblon has evolved over 35 years in response to the needs of the user community as represented primarily by the members of the four PhiloBiblon teams (for Spanish, Portuguese/Galician-Portuguese, Catalan, Golden Age poetry). To what extent do we need to accommodate what the library world has been doing? We're not reimagining PhiloBiblon, but rather trying to move it into the web-based wiki world. We want to incorporate external authority files, such as the Virtual International Authority File (VIAF), the Getty thesauri, and Denis Muzerelle's Vocabulaire codicologique. Rob and Olaf: Suggest starting with a small number of records (e.g. 2 or 3 codices, based on your needs). This has been done. Choose complex items (10 of each sort) and go from there. E.g., items for each: books (with Q-numbers for all the books), toponyms, institutions, etc.

What's the process of getting PhiloBiblon into Wikibase? Use QuickStatements (or the API) to feed a spreadsheet into the machine. It takes a few hours to feed the data into the machine, but it's simple. The benefit of a CSV import is that you can use OpenRefine to augment the identifiers and reconcile data. How many items are on the largest spreadsheet?

John May: (project janitor, coder) will perform rectifications and homogenization in the Windows database. It's straightforward to export in CSV / spreadsheet, but the files are exceptionally large. The creation of Q and P numbers can be done programmatically, but the problem is that the ontology will need to be made clear. Design desiderata: to accommodate ambiguity and uncertainty, so PhiloBiblon is made to be flexible and customizable. He'll await the orders from Olaf as to how he wants the data. The PhiloBiblon DBMS is an n-dimensional dynamic data model (which can be normalized as needed); ca. 60 MB for each bibliography (spreadsheet); no fixed field lengths; any structure / sub-sub-structure can contain structure - it's all based on arrays… 6K texts, 14K witnesses. John can write the code and tag the texts with P / Q numbers that would correspond to Wikibase. PhiloBiblon is not a standard data model… so it will take some time to get the CSV…

How is PhiloBiblon used currently? Charles: the primary use of PhiloBiblon currently:

For editors of texts: How many MSS / printed editions are there of a given text? Descriptions of MSS. There are lots of other potential queries based on data in PhiloBiblon, but it is currently difficult to extract, e.g.:

Codicology: How many MSS have gatherings of 12 leaves, and where do these MSS come from? Where and when are particular catchword or signature types used in gatherings?

Prosopography: What individuals were active in a given location in a given time period?

Óscar: There's not a lot of interaction with PhiloBiblon, although some users do download forms from our Collaborate page, fill them out, and return them for incorporation into PhiloBiblon. Xavier: demo of SPARQL in a nice web editor from the Biblioteca Dixital Galiciana… (Olaf was impressed with the editor)

Olaf: FactGrid is a limited-user database, so only users with accounts can change things. Everyone in the project will get an account. Users like the ones Charles describes can be given an account. There's no central form for filing 'issues'; you just change things when you see a mistake. People who make a change add a reference about the change. You can keep mistaken information in the database to make sure people don't change it back; it remains in the database 'downgraded' as mistaken information. Qualifiers are created for hypothetical statements. The public can see everything and run any search, but editing is not open to the public. Once you have your data in qualified triples, it will be used by different users depending on their own interests.

Charles: We want to facilitate crowd-sourcing in order to tap into Hispanists all over Europe who can describe MSS in their local libraries and add that information to PhiloBiblon directly.

Olaf: "spread accounts" - We can do this by giving everyone interested in contributing an account. Wikidata doesn't have working hypotheses; they want facts, not theories. FactGrid, by contrast, allows marking the type of knowledge (hypothetical, guess, needs to be substantiated). You can exercise editorial control / block items… You can put items on a watch list and check who edits them. Otherwise everyone is allowed to edit and we watch the items as they get edited. Olaf will give people accounts (based on email addresses, website / institutional titles), but he can also give admin accounts and show you how to create these accounts…

Charles: Would it be useful to add a field in PhiloBiblon for a Wikibase Q / P number (as we correct our data)?

Yes, it's advisable. FactGrid will show these for Wikidata numbers. This will be an interactive process, back and forth, until we get it right…

John May: In the production of these spreadsheets, what are we trying to do? E.g. there are 200 cells (with structure within, including P-numbers and Q-numbers). In a normalized table we would break these up, but they are not in that format yet. How are P / Q numbers assigned?

Tasks:
  • Get familiar with FactGrid & Wikibase.
  • Adam will be working on linking the entities between PhiloBiblon and Wikibase.
  • Get admin accounts from Olaf (so you can create accounts for the project).
  • Take the framework and CSV which John May will make and work with Olaf, Josep, and Jason.
  • Once we have the CSV, contact Jason Kovari (question: do we need entities or just metadata for the text?).
  • Josep will work on obtaining the entities from the texts (see interface).
  • Josep will work on making an API for easy input options.
  • Rob: We should write down the primary things you want to refer to, e.g. 'ownership' (provenance in the museum and library world), if that's something you want to refer to independently of the object. From there you can work out the CSV tables and get them into the system.
  • Everyone: Communicate all problems on the project page, so anyone can see the discussion that led to certain solutions (so our models get adopted). Transparency is key for others to follow your line of thinking.

Dan Gullo's suggestions (17 May 2021) for MSS to use as test objects

When choosing records, I would suggest some basic criteria (I put this here before my suggestions).

  • Well-established authors that are found in multiple collections
  • Well-established authors that are found in multiple languages
  • Well-established authors that may be found outside of Spanish or Portuguese libraries
  • Complex author-title relationships, that may involve a known translator and a known author
  • Complex title-title relationships, perhaps a commentary on a known work
  • Complex titles with multiple variants, perhaps by language
  • Works that are in print and manuscript
  • Works with established bibliography

I would suggest Gonzalo de Berceo and known works to deal with a major author

  • BETA bioid 1211

I would suggest Isaac of Nineveh and the translations and printed editions of his works (serious issues to wrestle with here because of the need to reconcile your data so that one author moves from 3 IDs in PhiloBiblon to one QID in Wikibase)

  • BETA bioid 1186
  • BETA bioid 1398
  • BITAGAP bioid 1079

I would suggest Pseudo-Seneca for the same reasons as Isaac of Nineveh

  • BETA bioid 1192
  • BITAGAP bioid 1107
  • BITECA bioid 6379

For particularly interesting records to think about the complexity of data migration, here are three.

Very complex record

  • BETA manid 1567

A record which is not a complete manuscript: how do you want to represent a part of a manuscript, rather than a complete manuscript?

  • BITECA manid 2554

For a printed book with complex data

For institutions, you can use mine for fun, because you have it three times in PhiloBiblon, with three different names.

  • BITECA libid 1128
  • BITAGAP libid 852
  • BETA libid 668


Could you please set direct links into the Items? This is how it is done for the MS Toledo: Biblioteca Capitular, 43-13 (1344-03-04):

Test Items created in 2019

You can also access all three objects if you use the established IDs like "BETA manid 1106" as shortcuts in the search field above. --Olaf Simons (talk) 16:01, 23 May 2021 (CEST)

Do links to PhiloBiblon have to be dynamic?

I am asking this because I tried to create an External identifier from FG to PB.

The external identifier could lead directly into the data set if I had a link with a stable environment and just the ID changing. I would replace the ID position in the URL with $1 and be able to link from the number into the PB item page... Seems that does not work, as I see you do not give any links into your own database...

Just by the way: sign all comments with ~~~~ - that creates automatic and dated signatures when you press save. --Olaf Simons (talk) 22:55, 19 May 2021 (CEST)

PhiloBiblon URLs are created dynamically, but they are stable. We link to them all the time from external web pages, such as in our blog. Typically we hide the URL strings within / underneath the text description. I note that when I copy such items to Wikidata the underlying link disappears and has to be added as a real URL. Thus BETA manid 1106 (Toledo: Biblioteca Capitular, 43-13 (https://pb.lib.berkeley.edu/xtf/servlet/org.cdlib.xtf.dynaXML.DynaXML?source=BETA/Display/1106BETA.MsEd.xml&style=MsEd.xsl%0A%0A%20%0A%20%0A%20&gobk=http%3A%2F%2Fpb.lib.berkeley.edu%2Fxtf%2Fservlet%2Forg.cdlib.xtf.crossQuery.CrossQuery%3Frmode%3Dphilobeta%26mstype%3DM%26everyone%3D%26city%3D%26library%3D%26shelfmark%3D13+43%26daterange%3D%26placeofprod%3D%26scribe%3D%26publisher%3D%26prevowner%3D%26assocname%3D%26subject%3D%26text-join%3Dand%26browseout%3Dmsed%26sort%3Dtitle)

Charles Faulhaber (talk) 22:16, 20 May 2021 (CEST)

It means that you cannot use the identifiers, as you spell them, as the values of our external-identifier properties. See the GND or Google properties and note the Formatter URL property and how it is handled:
I have managed a Formatter URL on Q164503 (Biblioteca Capitular de Toledo), but I had to state "1106BETA.MsEd" to make that work, since "BETA manid 1106" does not occur in the URL that has to be created.
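Reconstructed from the example link above (and dropping the session-specific gobk parameter), the Formatter URL for such a BETA MsEd identifier would presumably look something like this, with the stored external-ID value then being "1106BETA.MsEd" rather than "BETA manid 1106":

https://pb.lib.berkeley.edu/xtf/servlet/org.cdlib.xtf.dynaXML.DynaXML?source=BETA/Display/$1.xml&style=MsEd.xsl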

PhiloBiblon Project Page in multiple languages

I would like to add the Spanish, Catalan, and Portuguese versions of this basic description of PhiloBiblon. 21:35, 20 May 2021 (CEST)


Two ways. Option 1: You log in in the language which you want to add, and then you will also see the Properties that should be translated as well.

Option 2: You use the QuickStatements input. Make your statements in Excel or in a Google Spreadsheet and then paste them into the QuickStatements field and use Version 1 input. This is the content for the three columns of a basic triple statement that sets a Label, Description or Alias:

QNumber - Les - "Label in Spanish"
QNumber - Des - "Description in Spanish"
QNumber - Aes - "Alias in Spanish"
QNumber - Lca - "Label in Catalan"
...

all the other Language codes: here.
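Purely as an illustration (the file name and column headers are invented), lines in this Version 1 format can be generated from a CSV export of such a spreadsheet with a few lines of Python; QuickStatements Version 1 expects the columns to be separated by tabs:

import csv

# Hypothetical input file with columns: qnumber, label_es, desc_es
with open("spanish_labels.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        q = row["qnumber"]
        print(f'{q}\tLes\t"{row["label_es"]}"')   # Label in Spanish
        print(f'{q}\tDes\t"{row["desc_es"]}"')    # Description in Spanish

The output can then be pasted into the QuickStatements input field as described above.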

Recommendable:

  • Create Items always with a Batch input (see the menu) with all the languages you need.
  • Work always in the language of your sources and translate Properties into your language while you are using the software (or when you feel you have nothing better to do - that's the day when you can translate all 600 Properties and their descriptions into Spanish and Catalan). --Olaf Simons (talk) 21:48, 20 May 2021 (CEST)

Linking pages to sub-pages

I created a new item:

BETA / Bibliografía Española de Textos Antiguos

Now I want to link it to the PhiloBiblon page as "part of".

It's obvious that the first thing I need to do is take the Wikidata tours: https://www.wikidata.org/wiki/Wikidata:Tours

Charles Faulhaber (talk) 21:58, 20 May 2021 (CEST)

If you mean you want to say the item is part of this page - don't. We are not in the Database, just on a Wiki page. But you could create an Item PhiloBiblon and you could then make that statement. --Olaf Simons (talk) 22:16, 20 May 2021 (CEST)
The best thing to do will be to create a PhiloBiblon research statement and to link that with the P131 property: "Research that contributed to this data set". The cool thing here is that you can connect this "research statement" to complex information about the team and publications that are connected to this. This will also be the easy way to ask for all database items of your project. --Olaf Simons (talk) 16:06, 23 May 2021 (CEST)

Adam Anderson (talk) 02:33, 26 May 2021 (CEST) SPARQL no longer connects text input fields; it normally connects items to items. E.g., a person is a member of a lodge, which is another item which contains information with statements. Each item is a database object.

Advice for beginners

I have been doing some little experiments on the FactGrid PhiloBiblon page, with much help from Olaf. We're keeping all of our discussions there. One important thing he told me is that you should sign all of your interventions with four tildes, as I am doing here. You will see them at the bottom when you edit this page. When you look at the non-editable page you will see "Charles Faulhaber (talk) 22:07, 20 May 2021 (CEST)". A little learning is a dangerous thing, so I'm going to take some time to look at the various Wikidata Tours (https://www.wikidata.org/wiki/Wikidata:Tours) before I dive back into this. Charles Faulhaber (talk) 22:10, 20 May 2021 (CEST)

Recommended also: our help section. It might be more to the point on FactGrid, and it will profit from users who reformulate things as they would have needed them. (Also welcome: language improvements, wherever the German user has failed.) --Olaf Simons (talk) 11:09, 21 May 2021 (CEST)

Project Dimensions

Present stats:

TABLE                      BETA   BITAGAP   BITECA     BIPA
ANALYTIC (witnesses)      14692     52084    12239    89926
REFERENCES                 7270     21558     5976      472
PERSONS                    7423     32309     3473     3418
GEOGRAPHY                  1814      4759      840      211 *
INSTITUTIONS                794      3297      585        4
LIBRARIES                   915       455      420      119
MANUSCRIPTS & IMPRINTS     5168      5886     1971     1572
COPIES OF PRINTED BOOKS    4157      1146     1473      137
SUBJECT HEADINGS            339        34      149      126
WORKS (texts)              6034     31962     6173    89913
TOTAL                     48606    153490    33299   185898   (grand total: 421293)

* Places: I would definitely recommend an input of all Spanish places (from deserted villages to cities, some 35,000) from sources that come with external identifiers and with geographic coordinates. --Olaf Simons (talk) 11:46, 21 May 2021 (CEST)

The best way to start the input is probably 1: all places, 2: all libraries, 3: All institutions... - as these things will work without your texts and codices, while the texts and codices will not work without these. Another approach would be to run a first input simply on Labels, (provisional) Descriptions and P2 information in order to get the grid of FG Q-Numbers and their matches of PB Identifiers as fast as possible. Once you have the matches you can run all subsequent inputs with greater ease. --Olaf Simons (talk) 16:15, 23 May 2021 (CEST)

In fact, this is what we had suggested doing as the first steps, starting with these three entities, primarily because they are the least complex in our data model, although, as you point out, they also have fewer dependencies. Toponyms do have internal dependencies, though: cities belong to states or provinces ... --Charles Faulhaber (talk) 20:36, 23 May 2021 (CEST)

Running a "first input simply on Labels, (provisional) Descriptions and P2 information in order to get the grid of FG Q-Numbers and their matches of PB Identifiers" is a wonderful idea. Needless to say I don't know what that implies. Presumably our programmer John May will have to generate a list of PB identifiers. For every record we have a "moniker," a constructed field that serves to disambiguate similar records. Thus the moniker for persons consists of first and last name, followed by the person's title and dates, or some other identifying information. --Charles Faulhaber (talk) 02:51, 26 May 2021 (CEST)
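Purely as an illustration of what such a first input might contain (the column headers and the description text are invented, and the P2 value is left as a placeholder), one CSV row for a person, with the label built from its moniker, could look like this:

philobiblon_id,label_en,description_en,P2
"BETA bioid 1110","Fernando III, el Santo, king of Castile and Leon","Person in PhiloBiblon (BETA)","<Q-number of the FactGrid class for human beings>"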

Input efficiency: how to map controlled vocabulary to P#/Q#

This Friday 1800 CEST / 0900 PDT / 1200 EDT works for me. I've also invited Josep Formentí, who is in Barcelona and who has been looking at the FactGrid API.

We are currently focusing on cleaning up and coordinating our data in the four PhiloBiblon bibliographies.

Database designer John May is experimenting with CSV exports of data from the Biography/Persons table to allow us to manipulate it more easily to find errors, inconsistencies, and duplicates. Josep just mentioned to me a deduplication library. This is something that may be useful once we're a little farther into the process.
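The deduplication library Josep mentioned is not named here; just to illustrate the general idea, a rough first pass over such a CSV export could flag near-identical monikers with nothing more than the Python standard library (file and column names are invented):

import csv
from difflib import SequenceMatcher

def similar(a, b, threshold=0.9):
    """Crude string similarity between two monikers."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

with open("beta_persons.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Pairwise comparison is quadratic: fine for a test batch,
# too slow for all 40,000 Biography records at once.
for i, a in enumerate(rows):
    for b in rows[i + 1:]:
        if similar(a["moniker"], b["moniker"]):
            print("possible duplicate:", a["moniker"], "<->", b["moniker"])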

In addition to Adam's questions I want to follow up with Olaf on three issues:
1. What's the most efficient way to compare our lists of controlled vocabulary with existing FactGrid P# and Q#?
2. How would we "run a first input simply on Labels, (provisional) Descriptions and P2 information in order to get the grid of FG Q-Numbers and their matches of PB Identifiers as fast as possible"?
3. How do you log in to FactGrid in Spanish?

Yashila, glad to meet you,

-- Charles Faulhaber (talk) 00:57, 10 June 2021 (CEST)

Just briefly:
  • You can only use the database's wiki in Spanish if you have an account. Your account opens the menu top left with the "preferences" link that allows you to configure FactGrid. All people connected to this project should have accounts.
  • The entire property vocabulary is in the FactGrid:Directory of Properties. We should get a Spanish version of this structure in the course of this project. What we need here is English/Spanish speakers who go through all 613 Properties top down. They all need Spanish labels and descriptions to work. Google Translate helps. Maybe this is a job for a student assistant if you have one; maybe it is interesting work for anyone, as it will make you familiar with the vocabulary.
  • The page for collective data modelling on codices should be this one: FactGrid:Data model for manuscripts. Do not hesitate to edit it, insert questions where you feel you need more clarity, restructure it as you need. All wiki pages have a version history that allows us to compare the states this page has been in; nothing can get lost, nothing can be destroyed. The FactGrid player who created it is User:Marco Heiles, and he should join you in data modelling debates, as he is presently experimenting with some 200 codices from his PhD thesis. You are not bound to his decisions since we are using triples - so you can have any triple you want without affecting anyone on board. But we gain project cohesion if we find a language of triples that works for all. The names of the properties are irrelevant; the important thing is that we use them the same way.
  • Duplicates in biographies: We should avoid all duplicates by setting Wikidata reference numbers wherever a Wikidata item exists. The cool thing about the Wikidata reference is that you cannot create two items on FactGrid with the same Wikidata reference number. The question for John May is then: can you map your names against Wikidata? (A sketch of such a lookup follows below.)
  • How to run a first input that just creates all the objects, so that we have a list of FactGrid Q-Numbers and their corresponding PhiloBiblon IDs: We do the first input via CSV. This is a first input I prepared with a colleague - not yet in the machine: Google Spreadsheet of a complex first input. If you go for a simpler input you should still have:
  • Labels in as many languages as you require (English/ French/ Spanish/ Portuguese (?)/ German)
  • Descriptions at least in English, FactGrid's default language.
  • A P2 (what is it?) statement on all items that flow in (a codex, a human being, a place...)
  • Maybe the PhiloBiblon ID if that helps you to organise the matches.
  • Question on my side: is there a Spanish register of all places from villages to cities? Could we match that with Wikidata and PhiloBiblon places in order to make sure that we create a unified geographical data environment that won't require us to create and to geo-reference ever new places at later stages?
It might be good to create a Spanish Help Section on FG and I'd recommend that all new users take a brief tour through the English help section. Clarify things where I did not see the difficulties, demand clarification where I failed. New users are the best to do that as only they have the perspective to see the deficits in my/our explanations. --Olaf Simons (talk) 09:03, 10 June 2021 (CEST)
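On the question of mapping names against Wikidata (raised in the bullet on duplicates above), here is a rough sketch of how such a lookup could be automated with the public Wikidata search API (wbsearchentities). Candidate matches would still need human review, and monikers may need to be trimmed to a plain name before searching:

import requests

def wikidata_candidates(name, language="es", limit=5):
    """Return (QID, label, description) candidates for a name from Wikidata."""
    r = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbsearchentities",
            "search": name,
            "language": language,
            "uselang": language,
            "format": "json",
            "limit": limit,
        },
        headers={"User-Agent": "PhiloBiblon-reconciliation-sketch/0.1"},
    )
    return [(hit["id"], hit.get("label", ""), hit.get("description", ""))
            for hit in r.json().get("search", [])]

print(wikidata_candidates("Fernando III el Santo"))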

Spanish and Portuguese places

  • We will get all Spanish places with two Wikidata searches: all INE place codes, P772 (some 9,000 places), and a second search for all villages (to grab small things that escaped the code); see the query sketch below.
  • There is a complementary Wikidata property P6324 for all Portuguese places (some 3500), plus 182 villages

The thing to consider is: what else to grab in the same searches, as it will be easy to import useful data on that move. It will be worth taking a look at the Wikidata items to round up a shopping list of data to grab. The entire input will be about 13,000 items (less than the French input, more than the Swedish - easy to handle). --Olaf Simons (talk) 23:10, 13 June 2021 (CEST)
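To give a sense of the shape of those searches, here is a sketch of the Spanish one as a query against the public Wikidata endpoint (P772 = INE municipality code, as above; P625 = coordinate location). The Portuguese search would swap in P6324:

import requests

QUERY = """
SELECT ?place ?placeLabel ?ine ?coord WHERE {
  ?place wdt:P772 ?ine .                  # INE municipality code (Spain)
  OPTIONAL { ?place wdt:P625 ?coord . }   # coordinate location
  SERVICE wikibase:label { bd:serviceParam wikibase:language "es,en". }
}
"""
r = requests.get("https://query.wikidata.org/sparql",
                 params={"query": QUERY, "format": "json"},
                 headers={"User-Agent": "PhiloBiblon-places-sketch/0.1"})
print(len(r.json()["results"]["bindings"]), "places with an INE code")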

There are probably fewer than 100 places in Italy, as well as small numbers from most other European countries, from the U.S., and from Latin America. We can locate these manually in Wikidata prior to export. --Charles Faulhaber (talk) 23:19, 13 June 2021 (CEST)

First steps

For each data type = entity = PhiloBiblon table (Analytic, Bibliography, Biography, Copies, Geography, Institutions, Libraries, Ms/Ed, Subjects, Uniform Title) the following steps need to be done:

1. Deduplication of records

2. Data clean-up. In the process identify Wikidata/FactGrid Q# for Geography, Biography, Institutions, Libraries, to the extent possible.

3. Definition of mapping between PhiloBiblon fields and "dataclips" (controlled vocabulary) and FactGrid P# and Q# (e.g., PhiloBiblon date of birth = FactGrid P77).

4. Identification of PhiloBiblon entities with FactGrid Q entities (e.g., Fernando III, el Santo, king of Castile and Leon = BETA bioid 1110 = BITAGAP bioid 2054 = BITECA bioid 6515 = FactGrid Q254531).

5. Import PhiloBiblon CSV records from Windows PhiloBiblon into FactGrid, taking account of points 3 and 4 (a sketch of those mappings follows this list).

6. Enrich records with data from external sources, e.g. VIAF.
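As noted in point 5, steps 3 and 4 amount to two crosswalk tables. Purely as an illustration (everything here other than the two examples already given in points 3 and 4 is a placeholder), in code they could be as simple as:

# Step 3: PhiloBiblon field / dataclip -> FactGrid property number
FIELD_TO_PROPERTY = {
    "date_of_birth": "P77",            # example from point 3
    # further fields to be added as the mapping is worked out
}

# Step 4: PhiloBiblon entity IDs -> FactGrid Q-number
ID_TO_FACTGRID = {
    "BETA bioid 1110": "Q254531",      # Fernando III, el Santo (example from point 4)
    "BITAGAP bioid 2054": "Q254531",
    "BITECA bioid 6515": "Q254531",
}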

Some of these steps can be carried out in parallel, i.e., Steps 1-4.

The order in which entities should be imported into FactGrid needs to be decided. It seems clear that Geography should go first, because its data are used in Biography, Institutions, Libraries, Ms/Ed, and Uniform Title.

Institutions and Libraries should come next because the number of records to be imported is relatively small, and many of them will already have Wikidata Q# if not FactGrid Q#.

Biography (Persons) would come next because its records are needed for Analytic, Copies, Ms/Ed, and Uniform Title. There are some 40,000 Biography records in total, the vast majority of which will have neither Wikidata nor FactGrid records.

Uniform Title, Ms/Ed, Copies, and Analytic should follow, in that order.

This is in fact the same order set forth in the original Work Plan in the NEH proposal. It is worth noting that we are already starting work on Geography, which in the Work Plan is scheduled for October.

--Charles Faulhaber (talk) 23:56, 13 June 2021 (CEST) (for Josep Maria Formentí)