FactGrid talk:PhiloBiblon



1st Web-Meeting, 18 May 2021

Here are the notes from the meeting of 18 May 2021:

PhiloBiblon Meeting Notes

Dear Colleagues,

Here's what we have in the Workplan for this summer:

Objectives:
  • Review of Wikibase software: standards, protocols, data formats, and implementations; comparison with the PhiloBiblon schema and data dictionaries and current and desired functionalities.
  • Scenarios to describe how contributors and users will interact with PhiloBiblon.
  • Functional specifications for the features required for those scenarios.
  • Identification of a small set of particularly rich and related records (number TBD) from each of PhiloBiblon's four databases (BETA, BIPA, BITAGAP, BITECA) and ten tables to serve as test cases.
  • Ongoing data clean-up in legacy PhiloBiblon.

Tasks:
  • T1. Review Wikibase software: standards, protocols, data formats, implementations (PI, Anderson, Formentí, Simons)
  • T2. Develop user scenarios based on PhiloBiblon data and current and desired functionalities (PI, academic staff, Adv. Board: Dagenais, Gullo)
  • T3. Create functional specification for features needed to re-create these functionalities (PI, Anderson, Formentí)
  • T4. Identify set of test target records (PI, academic staff)
  • T5. Ongoing clean-up of legacy data (PI, academic staff)

5/18/21 Meeting Notes

Attendance:
  • Adam Anderson (Zoom host), data analyst for the project, Berkeley lecturer
  • Charles Faulhaber, project PI and (new) director of the Bancroft Library
  • Josep Formentí, software engineer in Barcelona; also knows NLP and web development
  • Olaf Simons, book historian at the University of Erfurt, Gotha; Wikimedia platform & FactGrid
  • Randal Brandt, head of cataloging at UC Berkeley, rare books
  • Xavier Agenjo, director of projects for the Fundación Ignacio Larramendi in Madrid, creator of more than 40 digital libraries
  • Daniel Gullo (dgullo@csbsju.edu), director of collections, cataloging; creating databases of libraries, modern digital collections, controlled vocabulary for underrepresented religious traditions in the Middle Ages (vHMML online database, NEH project director)
  • Óscar Perea Rodríguez, lecturer at USF, working with Charles on this since 2002
  • Jason Kovari, cataloging rare books, Cornell
  • Robert Sanderson, director of digital collections at Yale
  • Cliff Lynch, director of the Coalition for Networked Information, a small nonprofit in DC; in the School of Information at UC Berkeley; worked on the predecessor of the CDL
  • John May (phone), software developer, information management systems, designer of PhiloBiblon (software)
  • John Dagenais, Professor of Spanish at UCLA, degree in library science, user of PhiloBiblon

PhiloBiblon Project: Goes back to 1975, as a spin-off of the Dictionary of the Old Spanish Language project at the U of Wisconsin-Madison, a Spanish version of the OED based on contemporary uses of the language. To do this they created an in-house database (1975 onward): the Bibliography of Old Spanish Texts (BOOST), lineal ancestor of PhiloBiblon. Constant technology change: the collection was put on CD-ROM discs by 1992, with digital images + text; 1994 brought the internet (Netscape). Charles was teaching a course on DH computing (including gopher, OCR, etc.). One of his students introduced him to the World Wide Web and he said that it wasn't going to be important… Since then they've been working on keeping up with the different versions (1.0, 2.0, 3.0 = Linked Open Data with RDF). Currently the project exports data from Windows PhiloBiblon into XML files to upload to the server at Berkeley, where XTF (eXtensible Text Framework), run by the CDL, takes large XML files from 9 PhiloBiblon tables (uniform title, manuscripts/editions, persons, etc.) and parses each of these files into individual records for querying. Objective: to get us aligned with Wikibase / the Wikimedia Foundation, to piggyback on their data and technology moving forward.

Olaf Simons: FactGrid (works on Masonic institutions). Charles found Olaf through commentary on the FactGrid blog. Wikibase is the software behind the Wikidata project, under continuous development since 2012. It is used by national libraries worldwide to control data: you create triple-based statements (two entities linked by a relation), annotate them with further statements, and develop the metadata further. You can see who is editing it by the minute. Each entity has a Q-number; for each person, e.g., (gender, address, field of research, etc.) you can add as many statements as you want, and you can qualify these statements. You can add references to the entry (as many as you want). Used as a source for anything imaginable: we collect statements that become entries. Using SPARQL you can query this, e.g., the Illuminati members list, Q: date of birth? and it appears as a column for each member. SPARQL does not connect text input fields; it normally connects items to items. E.g., a person is a member of a lodge, which is another item which contains information with statements. Each item is a database object. We will have to translate input-field information into objects, and for that to work, each object needs a statement. First point: we don't deal with books; instead you have people, places and institutions connected to these books. All of these need statements to work. Produce a network of items and relations. Consider the types of objects (e.g., geographic names, proper nouns, etc.). We'll need to get an idea of the number of items you'll be creating beforehand. You should create your own team and get accounts for those members, along with managers of the accounts to run the team independently. Charles: For Spanish texts, 6K texts and 8K individuals. PhiloBiblon "data clips" correspond to P properties in Wikibase.
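As a concrete illustration of the kind of SPARQL query described above (a members list with a date-of-birth column), here is a hedged sketch in Python against FactGrid's public query service. The endpoint URL is an assumption, and the membership property and Illuminati Q-number are placeholders; only P77 (date of birth) is attested later on this page.

import requests

# Placeholders: swap in the real FactGrid P- and Q-numbers before running.
QUERY = """
PREFIX fg: <https://database.factgrid.de/entity/>
PREFIX fgt: <https://database.factgrid.de/prop/direct/>
SELECT ?member ?memberLabel ?birth WHERE {
  ?member fgt:P91 fg:Q10677 .            # hypothetical: member of (P91) the Illuminati (Q10677)
  OPTIONAL { ?member fgt:P77 ?birth . }  # P77 = date of birth (cited later on this page)
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

r = requests.get(
    "https://database.factgrid.de/sparql",  # assumed endpoint of FactGrid's query service
    params={"query": QUERY, "format": "json"},
    timeout=60,
)
r.raise_for_status()
for row in r.json()["results"]["bindings"]:
    print(row["memberLabel"]["value"], row.get("birth", {}).get("value", "?"))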

Creating properties is not the issue. You create properties as needed and link them on a text-by-text basis.

The real problem is to understand the types of objects you have to create, and how they’re interconnected. They need to be created before you can link them.

E.g., often we start with places and move from there. First create the objects; then you interconnect them.

Usually projects come to us with a spreadsheet…

Foreseeable problems: The entire project is multilingual (using P-numbers and Q-numbers (Deutsche Nationalbibliothek GND)), which allow for different labels on these items and properties. You should use it in the language of your sources. The database currently accommodates French, English and German (the main users of the database), who put their labels in the database. You will need to add labels in Spanish and Portuguese. Programming issues: the software is not designed to create presentable projects at the moment. This is ongoing work: 1) the FactGrid Viewer, which creates pages on the fly based on information from the database. You will want something similar for your own data. Bruno Belhoste is the creator; you can contact him. This is where Wikibase is currently underdeveloped. 2) A "Knowledge" tool is in development which can also show the metadata for each entity. Otherwise you get the SPARQL query functionality.

Robert Sanderson (in chat): We used XTF at Getty for our archives: https://xtf.cdlib.org/ For what it's worth, at Yale we're doing this across the libraries, archives and museums; the current data is 45 million such entities, spanning 2.5 billion triples. The difficulty is understanding the modeling and doing the transformation; the database side is essentially free. We have 15M MARC records, which get turned into 45M entities: people, places, concepts, objects, images, digital things (8 classes). We're not using Wikibase, but rather the main library standards by themselves. The difference between the data modeling paradigms will necessitate the creation of new identifiers, but Wikibase is really good at managing external, typed identifiers :) We should write down the things you want to refer to, e.g., 'ownership' (provenance in the museum and library world), if that's something you want to refer to independently of the object. From there you can work out the CSV tables and get them into the system. Not to be a broken record, but the modeling also determines the possibilities for the queries. If you have (for example) place-written and date-written on a text that can have multiple authors writing at different times and at different places, then you need some way to connect author / place and time *in the model*.

Daniel Gullo: Biblissima is using this model: https://data.biblissima.fr/w/Accueil Question about Q-numbers… Warning: scholars use the MS IDs (manid), text IDs (texid), and other IDs established in the PhiloBiblon system.

Olaf: You will create external IDs (vital for places, e.g.) for any item in PhiloBiblon, and they will be linked to the IDs from PhiloBiblon. The Q-numbers are merged based on double entries (there’s movement, they are not deleted, but merged). There are ways of creating more key-value pairs without deleting IDs.

This is similar to the database we’re developing… When you’re migrating your data, you want to think of it hierarchically with a controlled vocabulary: 1) Continents, 2) Geographic names, 3) institutions, 4) other entities.

Jason Kovari: Is the plan to assess what users really want, or to move forward with what you have? Is part of the workplan assessing the work that you initially have? (In chat:) The word I was forgetting earlier was Affinity. There is a Wikidata Affinity Group as part of the LD4 Community: https://www.wikidata.org/wiki/Wikidata:WikiProject_LD4_Wikidata_Affinity_Group

Charles: There's an orthogonal relationship between the 'library world' and what we're doing. PhiloBiblon has evolved over 35 years in response to the needs of the user community as represented primarily by the members of the four PhiloBiblon teams (for Spanish, Portuguese/Galician-Portuguese, Catalan, Golden Age poetry). To what extent do we need to accommodate what the library world has been doing? We're not reimagining PhiloBiblon, but rather trying to move it into the web-based wiki world. We want to incorporate external authority files, such as the Virtual International Authority File (VIAF), the Getty thesauri, and Denis Muzerelle's Vocabulaire codicologique. Rob and Olaf: Suggest starting with a small number of records (e.g., 2 or 3 codices based on your needs); this has been done. Choose complex items (10 of each sort) and go from there, e.g., items for each of: books (with Q-numbers for all the books), toponyms, institutions, etc.

What’s the process of getting PhiloBiblon into Wikibase? Use quick statements (or API) / spreadsheet into the machine. It takes a few hours to feed the data into the machine, but it’s simple. The benefit of a CSV import is you can use OpenRefine to augment the identifiers and reconcile data. How many items on the largest spreadsheet?

John May (project janitor, coder): performs rectifications and homogenization in the Windows database. It's straightforward to export to CSV / spreadsheet, but the files are exceptionally large. The creation of Q and P numbers can be done programmatically, but the ontology will need to be made clear. Design desiderata: accommodate ambiguity and uncertainty, so PhiloBiblon is made to be flexible and customizable. He'll await orders from Olaf as to how he wants the data. The PhiloBiblon DBMS is an n-dimensional dynamic data model (which can be normalized as needed); ca. 60 MB for each bibliography (spreadsheet); no fixed field lengths; any structure / sub-sub-structure can contain structure: it's all based on arrays. 6K texts, 14K witnesses. John can write the code and tag the texts with P / Q numbers that would correspond to the Wikibase. PhiloBiblon is not a standard data model, so it will take some time to get the CSV.

How is PhiloBiblon used currently? Charles: the primary current uses:

For editors of texts: How many MSS / printed editions are there of a given text? Descriptions of MSS. There are lots of other potential queries based on data in PhiloBiblon, but it is currently difficult to extract them, e.g.:

Codicology: How many MSS have gatherings of 12 leaves, and where do these MSS come from? Where and when are particular catchword or signature types used in gatherings?

Prosopography: What individuals were active in a given location in a given time period?

Óscar: there's not a lot of interaction with PhiloBiblon, although some users do download forms from our Collaborate page, fill them out, and return them for incorporation into PhiloBiblon. Xavier: demo of SPARQL in a nice web editor from the Biblioteca Dixital Galiciana (Olaf was impressed with the editor).

Olaf: FactGrid is a limited-user database, so only users with accounts can change things. Everyone in the project will get an account. Users like the ones Charles describes can be given an account. There's no central form for filing 'issues'; you just change things when you see a mistake. People who make a change add a reference about the change. You can keep mistaken information in the database to make sure people don't change it back; it remains in the database 'downgraded' as mistaken information. Qualifiers are created for hypothetical statements. The public can see everything and run any search, but editing is not open to the public. Once you have your data in qualified triples, it will be used by different users depending on their own interests.

Charles: We want to facilitate crowd-sourcing in order to tap into Hispanists all over Europe who can describe MSS in their local libraries and add that information to PhiloBiblon directly.

Olaf: "spread accounts" - We can do this by giving everyone interested in contributing an account. Wikidata doesn't have working hypotheses; they want facts, not theories, whereas FactGrid allows for the type of knowledge (hypothetical, guess, needs to be substantiated). You can exercise editorial control / block items. You can put items on a watch list and check who edits them. Otherwise everyone is allowed to edit, and we watch the items as they get edited. Olaf will give people accounts (based on email addresses, website / institutional titles), but he can also give admin accounts and show you how to create these accounts.

Charles: Would it be useful to add a field in PhiloBiblon for a Wikibase Q / P number (as we correct our data)?

Yes, it's advisable. FactGrid shows these for Wikidata numbers. This will be an interactive process, back and forth, until we get it right…

John May: In the production of these spreadsheets, what are we trying to do? E.g., there are 200 cells (with structure within, including P-numbers and Q-numbers). In a normalized table we would break these up, but they are not in that format yet. How are P / Q numbers assigned?

Tasks:
  • Get familiar with FactGrid & Wikibase.
  • Adam will work on linking the entities between PhiloBiblon and Wikibase.
  • Get an admin account from Olaf (so you can create accounts for the project).
  • Take the framework and CSV which John May will make and work with Olaf, Josep, and Jason.
  • Once we have the CSV, contact Jason Kovari; the question is whether we need entities or just metadata for the text.
  • Josep will work on obtaining the entities from the texts (see interface).
  • Josep will work on making an API for easy input options.
  • Rob: We should write down the primary things you want to refer to, e.g., 'ownership' (provenance in the museum and library world), if that's something you want to refer to independently of the object. From there you can work out the CSV tables and get them into the system.
  • Everyone: Communicate all problems on the project page, so anyone can see the discussion that led to certain solutions (so our models get adopted). Transparency is key for others to follow your line of thinking.

Dan Gullo's suggestions (17 May 2021) for MSS to use as test objects

When choosing records, I would suggest some basic criteria (I put this here before my suggestions).

  • Well-established authors that are found in multiple collections
  • Well-established authors that are found in multiple languages
  • Well-established authors that may be found outside of Spanish or Portuguese libraries
  • Complex author-title relationships that may involve a known translator and a known author
  • Complex title-title relationships, perhaps a commentary on a known work
  • Complex titles with multiple variants, perhaps by language
  • Works that are in print and manuscript
  • Works with established bibliography

I would suggest Gonzalo de Berceo and known works to deal with a major author

  • BETA bioid 1211

I would suggest Isaac of Nineveh and the translations and printed editions of his works (serious issues to wrestle with here because of the need to reconcile your data so that one author moves from 3 IDs in PhiloBiblon to one Q-ID in Wikibase)

  • BETA bioid 1186
  • BETA bioid 1398
  • BITAGAP bioid 1079

I would suggest Pseudo-Seneca for the same reasons as Isaac of Nineveh

  • BETA bioid 1192
  • BITAGAP bioid 1107
  • BITECA bioid 6379

For particularly interesting records to think about the complexity of data migration, here are three.

Very complex record

  • BETA manid 1567

A record which is not a complete manuscript: how do you want to represent a part of a manuscript rather than a complete manuscript?

  • BITECA manid 2554

For a printed book with complex data

For institutions, you can use mine for fun, because you have it three times in PhiloBiblon, all with three different names.

  • BITECA libid 1128
  • BITAGAP libid 852
  • BETA libid 668


Could you please set direct links into the Items? This is how it is done for MS Toledo: Biblioteca Capitular, 43-13 (1344-03-04):

Test Items created in 2019

You can access all three objects also if you use the established IDs like "BETA manid 1106" as shortcuts in the search field above. --Olaf Simons (talk) 16:01, 23 May 2021 (CEST)

Do links to PhiloBiblon have to be dynamic?

I am asking this because I tried to create an External identifier from FG to PB.

The external identifier could lead directly into the data set if I had a link with a stable environment and just the ID changing. I would replace the ID position in the URL with $1 and be able to link from the number into the PB item page... It seems that does not work, as I see you do not give any links into your own database...

Just by the way: sign all comments with ~~~~ - that creates automatic and dated signatures when you press save. --Olaf Simons (talk) 22:55, 19 May 2021 (CEST)

PhiloBiblon URLs are created dynamically, but they are stable. We link to them all the time from external web pages, such as in our blog. Typically we hide the URL strings within / underneath the text description. I note that when I copy such items to Wikidata the underlying link disappears and has to be added as a real URL. Thus BETA manid 1106 (Toledo: Biblioteca Capitular, 43-13 (https://pb.lib.berkeley.edu/xtf/servlet/org.cdlib.xtf.dynaXML.DynaXML?source=BETA/Display/1106BETA.MsEd.xml&style=MsEd.xsl%0A%0A%20%0A%20%0A%20&gobk=http%3A%2F%2Fpb.lib.berkeley.edu%2Fxtf%2Fservlet%2Forg.cdlib.xtf.crossQuery.CrossQuery%3Frmode%3Dphilobeta%26mstype%3DM%26everyone%3D%26city%3D%26library%3D%26shelfmark%3D13+43%26daterange%3D%26placeofprod%3D%26scribe%3D%26publisher%3D%26prevowner%3D%26assocname%3D%26subject%3D%26text-join%3Dand%26browseout%3Dmsed%26sort%3Dtitle)

Charles Faulhaber (talk) 22:16, 20 May 2021 (CEST)

It means that you cannot use the external identifiers as you spell them ("BETA manid 1106") in our external-identifier Properties. See the GND or Google Properties and note the Formatter URL Property and how it is handled:
I have managed a Formatter URL on Q164503 (Biblioteca Capitular de Toledo), but I had to state "1106BETA.MsEd" to make it work, since "BETA manid 1106" does not occur in the URL that has to be created.
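To make the formatter mechanism concrete: a formatter URL contains $1 where the stored identifier value goes. A minimal sketch in Python, assuming the XTF URL pattern visible in the long link above (the query string is trimmed for readability, so treat the exact formatter as an assumption):

# The identifier value stored on the item is "1106BETA.MsEd", as noted above;
# the formatter rebuilds the XTF link by substituting that value for $1.
FORMATTER = ("https://pb.lib.berkeley.edu/xtf/servlet/org.cdlib.xtf.dynaXML.DynaXML"
             "?source=BETA/Display/$1.xml&style=MsEd.xsl")

print(FORMATTER.replace("$1", "1106BETA.MsEd"))
# Note: the "BETA/Display" path segment is specific to BETA; the other
# bibliographies would presumably need their own formatter or identifier form.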

PhiloBiblon Project Page in multiple languages

I would like to add the Spanish, Catalan, and Portuguese versions of this basic description of PhiloBiblon. 21:35, 20 May 2021 (CEST)


Two ways. Option 1: You log in in the language which you want to add, and then you will also see the Properties that should be translated as well.

Option 2: You use the QuickStatements input. Make your statements in Excel or on a Google Spreadsheet and then paste them into the QuickStatements field and use Version 1 input. This is the content for the three columns of a basic triple statement that sets a Label, Description, or Alias (a small generator sketch follows the recommendations below):

QNumber - Les - "Label in Spanish"
QNumber - Des - "Description in Spanish"
QNumber - Aes - "Alias in Spanish"
QNumber - Lca - "Label in Catalan"
...

all the other Language codes: here.

Recommendable:

  • Create Items always with a Batch input (see the menu) with all the languages you need.
  • Work always in the language of your sources and translate Properties into your language while you are using the software (or when you feel you have nothing better to do - that's the day when you can translate all 600 Properties and their descriptions into Spanish and Catalan). --Olaf Simons (talk) 21:48, 20 May 2021 (CEST)
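For batch work, the Version 1 lines above are easy to generate from a spreadsheet. A minimal sketch in Python; the sample description and the alias handling are illustrative only, while Q164503 is the Toledo library item mentioned earlier on this page:

# Each printed line is a TAB-separated QuickStatements V1 row: QNumber, then
# L/D/A plus the language code, then the quoted text.
rows = [
    ("Q164503", "es", "L", "Biblioteca Capitular de Toledo"),
    ("Q164503", "es", "D", "biblioteca capitular en Toledo, España"),
    ("Q164503", "ca", "L", "Biblioteca Capitular de Toledo"),
]
for qnum, lang, kind, text in rows:
    print(f'{qnum}\t{kind}{lang}\t"{text}"')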

Linking pages to sub-pages

I created a new item:

BETA / Bibliografía Española de Textos Antiguos

Now I want to link it to the PhiloBiblon page as "part of".

It's obvious that the first thing I need to do is take the Wikidata tours: https://www.wikidata.org/wiki/Wikidata:Tours

Charles Faulhaber (talk) 21:58, 20 May 2021 (CEST)

If you mean you want to say the item is part of this page - don't. We are not in the Database, just on a Wiki page. But you could create an Item PhiloBiblon and you could then make that statement. --Olaf Simons (talk) 22:16, 20 May 2021 (CEST)
:The best thing to do will be to create a PhiloBiblon research statement and to link that with the P131 property: "Research that contributed to this data set". The cool thing here is that you can connect this "research statement" to complex information about the team and the publications connected to it. This will also be the easy way to ask for all database items of your project. --Olaf Simons (talk) 16:06, 23 May 2021 (CEST)

Adam Anderson (talk) 02:33, 26 May 2021 (CEST): SPARQL does not connect text input fields; it normally connects items to items. E.g., a person is a member of a lodge, which is another item which contains information with statements. Each item is a database object.

Advice for beginners

I have been doing some little experiments on the FactGrid PhiloBiblon page, with much help from Olaf. We're keeping all of our discussions there. One important thing he told me is that you should sign all of your interventions with four tildes, as I am doing here. You will see them at the bottom when you edit this page; when you look at the non-editable page you will see "Charles Faulhaber (talk) 22:07, 20 May 2021 (CEST)". A little learning is a dangerous thing, so I'm going to take some time to look at the various Wikidata Tours (https://www.wikidata.org/wiki/Wikidata:Tours) before I dive back into this. Charles Faulhaber (talk) 22:10, 20 May 2021 (CEST)

Recommended also: our help section. It might be more to the point on FactGrid, and it will profit from users who reformulate things as they would have needed them. (Also welcome: language improvements, wherever the German user has failed.) --Olaf Simons (talk) 11:09, 21 May 2021 (CEST)

Project Dimensions

Present stats:

TABLE                      BETA     BITAGAP   BITECA    BIPA      Total
ANALYTIC (witnesses)       14,692   52,084    12,239    89,926    168,941
REFERENCES                  7,270   21,558     5,976       472     35,276
PERSONS                     7,423   32,309     3,473     3,418     46,623
GEOGRAPHY                   1,814    4,759       840       211      7,624*
INSTITUTIONS                  794    3,297       585         4      4,680
LIBRARIES                     915      455       420       119      1,909
MANUSCRIPTS & IMPRINTS      5,168    5,886     1,971     1,572     14,597
COPIES OF PRINTED BOOKS     4,157    1,146     1,473       137      6,913
SUBJECT HEADINGS              339       34       149       126        648
WORKS (texts)               6,034   31,962     6,173    89,913    134,082
TOTAL                      48,606  153,490    33,299   185,898    421,293

* Places: I will definitely recommend an input of all Spanish places (deserted villages to cities, some 13,000) from sources that come with external identifiers and with geographic coordinates. --Olaf Simons (talk) 11:46, 21 May 2021 (CEST)

The best way to start the input is probably 1: all places, 2: all libraries, 3: all institutions, as these things will work without your texts and codices, while the texts and codices will not work without them. Another approach would be to run a first input simply on Labels, (provisional) Descriptions and P2 information in order to get the grid of FG Q-Numbers and their matches to PB Identifiers as fast as possible. Once you have the matches you can run all subsequent inputs with greater ease. --Olaf Simons (talk) 16:15, 23 May 2021 (CEST)

In fact, this is what we had suggested doing as the first steps, starting with these three entities, primarily because they are the least complex in our data model and, as you point out, they have fewer dependencies. Toponyms do have internal dependencies, though: cities belong to states or provinces ... --Charles Faulhaber (talk) 20:36, 23 May 2021 (CEST)

Running "first input simply on Labels, (provisional) Descriptions and P2 information in order to get the grid of FG Q-Numbers and their matches of PB Identifiers" is a wonderful idea. Needless to say I don't knw what that implies. Presumably our programmer John May will have to generate a list of PB identifiers. For every record we have a "moniker," a constructed field that serves to disambiguate similar records. Thus the moniker for persons consists of First and Last Name, followed by the person's title and dates, or some other identifying information. --Charles Faulhaber (talk) 02:51, 26 May 2021 (CEST)

Input efficiency: how to map controlled vocabulary to P#/Q#

This Friday 1800 CEST / 0900 PDT / 1200 EDT works for me. I've also invited Josep Formentí, who is in Barcelona and has been looking at the FactGrid API.

We are currently focusing on cleaning up and coordinating our data in the four PhiloBiblon bibliographies.

Database designer John May is experimenting with CSV exports of data from the Biography/Persons table to allow us to manipulate it more easily to find errors, inconsistencies, and duplicates. Josep just mentioned a deduplication library to me; this is something that may be useful once we're a little farther into the process.
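Pending the choice of a dedicated deduplication library, here is a hedged sketch using only the Python standard library to flag likely duplicate Biography records by fuzzy-matching monikers; the names and the threshold are illustrative, and any hit would still need human review:

from difflib import SequenceMatcher

monikers = [
    "Fernando III, el Santo, king of Castile and Leon",
    "Fernando III el Santo, King of Castile & Leon",
    "Alfonso X, el Sabio",
]

for i, a in enumerate(monikers):
    for b in monikers[i + 1:]:
        ratio = SequenceMatcher(None, a.lower(), b.lower()).ratio()
        if ratio > 0.85:  # threshold chosen arbitrarily for this sketch
            print(f"possible duplicate ({ratio:.2f}): {a!r} ~ {b!r}")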

In addition to Adam's questions I want to follow up with Olaf on three issues:
1. What's the most efficient way to compare our lists of controlled vocabulary with existing FactGrid P# and Q#?
2. How would we "run a first input simply on Labels, (provisional) Descriptions and P2 information in order to get the grid of FG Q-Numbers and their matches of PB Identifiers as fast as possible"?
3. How do you log in to FactGrid in Spanish?

Yashila, glad to meet you,

-- Charles Faulhaber (talk) 00:57, 10 June 2021 (CEST)

Just briefly:
  • You can only use the database's wiki in Spanish if you have an account. Your account opens the menu top left with the "preferences" link that allows you to configure FactGrid. All people connected to this project should have accounts.
  • The entire property vocabulary is in the FactGrid:Directory of Properties. We should get a Spanish version of this structure in the course of this project. What we need here is English/Spanish speakers who go through all 613 Properties top down; they all need Spanish labels and descriptions to work. Google Translate helps. Maybe this is a job for a student assistant if you have one; maybe it is interesting work for anyone, as it will make you familiar with the vocabulary.
  • The page for collective data modelling on codices should be this one: FactGrid:Data model for manuscripts. Do not hesitate to edit it, insert questions where you feel you need more clarity, restructure it as you need. All wiki pages have a version history that allows us to compare the states this page has been in; nothing can get lost, nothing can be destroyed. The FactGrid player who created it is User:Marco Heiles, and he should join you in data modelling debates, as he is presently experimenting with some 200 codices from his PhD thesis. You are not bound to his decisions, since we are using triples, so you can have any triple you want without affecting anyone on board. But we gain project cohesion if we find a language of triples that works for all. The names of the properties are irrelevant; the important thing is that we use them the same way.
  • Duplicates in biographies: We should avoid all duplicates by setting Wikidata reference numbers wherever a Wikidata item exists. The cool thing about the Wikidata reference is that you cannot create two items on FactGrid with the same Wikidata reference number. The question for John May is then: can you map your names against Wikidata? (A sketch of one way to do this follows this list.)
  • How to run a first input that just creates all the objects, so that we have a list of FactGrid Q-Numbers with corresponding PhiloBiblon IDs: we do the first input via CSV. This is a first input I prepared with a colleague (not yet in the machine): Google Spreadsheet, a complex first input. If you go for a simpler input you should still have:
  • Labels in as many languages as you require (English/ French/ Spanish/ Portuguese (?)/ German)
  • Descriptions at least in English, FactGrid's default language.
  • A P2 (what is it?) statement on all items that flow in (a codex, a human being, a place...)
  • Maybe the PhiloBiblon ID, if that helps you to organise the matches.
  • Question on my side: is there a Spanish register of all places from villages to cities? Could we match that with Wikidata and PhiloBiblon places in order to make sure that we create a unified geographical data environment that won't require us to create and to geo-reference ever new places at later stages?
It might be good to create a Spanish Help Section on FG and I'd recommend that all new users take a brief tour through the English help section. Clarify things where I did not see the difficulties, demand clarification where I failed. New users are the best to do that as only they have the perspective to see the deficits in my/our explanations. --Olaf Simons (talk) 09:03, 10 June 2021 (CEST)
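As a sketch of the Wikidata mapping question above (whether John May's names can be matched against Wikidata, as asked in the list), the public wbsearchentities API can propose candidates for each moniker. Matches would still need human review before a Wikidata reference number is recorded:

import requests

def wikidata_candidates(name, language="es"):
    # Return (Q-number, description) candidates for a name from Wikidata.
    r = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbsearchentities",
            "search": name,
            "language": language,
            "format": "json",
        },
        timeout=30,
    )
    r.raise_for_status()
    return [(hit["id"], hit.get("description", "")) for hit in r.json()["search"]]

print(wikidata_candidates("Fernando III de Castilla"))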

Spanish and Portuguese places

  • We will get all Spanish places with two Wikidata searches: all INE place codes P772 (some 14,000 places) and a second search for all villages (to grab small things that escaped the code).
  • There is a complementary Wikidata property P6324 for all Portuguese places (some 3,500), plus 182 villages.

The thing to consider is: what else to grab in the same searches, as it will be easy to import useful data on that move. It will be worth taking a look at the Wikidata items to round up a shopping list of data to grab. The entire input will be about 1300. (Less than the French input, more than the Swedish; easy to handle.) --Olaf Simons (talk) 23:10, 13 June 2021 (CEST)
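A hedged sketch of the first of the two searches described above, run in Python against the public Wikidata SPARQL endpoint: all items carrying an INE code (P772), with coordinates (P625) where available. P772 and P6324 are the properties named in this thread; P625 (coordinate location) is standard Wikidata:

import requests

QUERY = """
SELECT ?place ?placeLabel ?ine ?coord WHERE {
  ?place wdt:P772 ?ine .                 # INE code for Spanish places
  OPTIONAL { ?place wdt:P625 ?coord . }  # coordinate location, if present
  SERVICE wikibase:label { bd:serviceParam wikibase:language "es,en". }
}
"""

r = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    timeout=120,
)
r.raise_for_status()
print(len(r.json()["results"]["bindings"]), "places with an INE code")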

There are probably fewer than 100 places in Italy, as well as small numbers from most other European countries, from the U.S., and from Latin America.
We can locate these manually in Wikidata prior to export. --Charles Faulhaber (talk) 23:19, 13 June 2021 (CEST)
P635 is the Property for all Italian municipalities (and again one will add "villages" and deserted places to get beyond the government's office of statistics). My recommendation is to do an all-Italian-places input together with the Spanish and Portuguese places, as that will save time (the manual work is the same whether it is done on 100 or 10,000). I'll show you how it is done as soon as you feel it is time. --Olaf Simons (talk) 00:04, 14 June 2021 (CEST)
* just pro memoria my Wikidata search on INE Entities (Spanish)
* the complementary Portuguese List (have to add the dates for population....)


The last link gives (under "Basic Geographical Nomenclature of Spain") the list of NGBE place names (SGR: ETRS89 in the Peninsula, Balearic Islands, Ceuta and Melilla, and REGCAN95 in the Canary Islands, both systems compatible with WGS84; longitude and latitude coordinates and UTM in the corresponding zone for all of Spain; CSV or Access database): a file of 1,048,575 lines... (unrealistic to work with) --Olaf Simons (talk) 09:29, 25 June 2021 (CEST)
Hi Olaf,
I've been looking at the Getty Thesaurus of Geographic Names <http://www.getty.edu/research/tools/vocabularies/tgn/index.html> as a possible input source for FactGrid PhiloBiblon. Here's what the website says about exporting data:
LOD formats: Users may acquire the full data for AAT, TGN, and ULAN, in JSON, RDF, N3/Turtle, and N-Triples through our Linked Open Data (LOD) project <http://www.getty.edu/research/tools/vocabularies/lod/index.html>. Data is refreshed monthly.
Am I correct in assuming that we would want it as N-Triples?
The Getty vocabularies include a lot of other terminology that would be useful for PhiloBiblon, e.g., the Art & Architecture Thesaurus.
Best,
Charles

I am less concerned about the format (that can be converted once we have it in a table format) than about the practicability of the set.

This one was your last proposal:

https://centrodedescargas.cnig.es/CentroDescargas/catalogo.do?Serie=NGBES

it creates a list of 1,048,575 lines. If you grab all Iberian villages, towns and cities (as I proposed) that will just be some 20,000, and you again will not use half of these (if I read the numbers of the four overlapping databases correctly); you will rather stand at 10-20% of my 20,000 proposal.

What is more important: you want external identifiers. I have not looked into the Getty dataset but you should consider two things:

You want a handful of identifiers - the official Spanish/Portuguese ones should be in there - and Wikidata should be among them as that is now the most frequent identifier linking to other identifiers and easy to use.

And you want a CC0 license. Any other license is to be avoided: we might quote Getty for "their" data, but we do not want to have to make sure that those who use our data quote Getty for it (and that is what their license demands of us in a strict legal sense). I am not going to run after people who download all Spanish places from FactGrid and forget to state that these are Getty data, CC-BY licensed and to be handled accordingly.

So look at the links I gave you above.

They create the realistic set of all known settlements, they are CC0, and we do not have to quote anyone on them and can use them under the same free license.

If Getty has identifiers which you feel we should also have (and which are not in the Wikidata set), then let us match that set to get these specific identifiers as well onto our set.

Take a look into both Wikidata queries and tell me what superiority Getty has, so that I can get that line of data onto an additional property.

So far I do not understand what Identifiers you actually need in order to map your data.

--Charles Faulhaber (talk) 07:13, 2 July 2021 (CEST) (from Olaf Simons e-mail)
Most of our data will not have any identifiers (e.g., Wikidata). I was hoping that we could create labels from the textual data (place names + province) that could match the Wikidata labels previously imported. One of our advisory board members is Rob Sanderson, who used to be the Getty's data science architect; I have a query in to him. I understand your point about the massive size of the Spanish dataset. I think that Wikidata will work perfectly well as a first step for place names.
At some point--much later on--I will want to consult with the Getty about their other ontologies, like the Art & Architecture Thesaurus, which has much of interest for the description of MSS--Charles Faulhaber (talk) 07:13, 2 July 2021 (CEST)
I have just created the Property for Getty identifiers, Property:P624. Wikidata, it turns out, has the property as well, but it is not well supported. We'll have to find out how we map our places onto the Getty numbers, but it will be good to have the additional feature. --Olaf Simons (talk) 12:28, 2 July 2021 (CEST)

First steps

For each data type = entity = PhiloBiblon table (Analytic, Bibliography, Biography, Copies, Geography, Institutions, Libraries, Ms/Ed, Subjects, Uniform Title) the following steps need to be done:

1. Deduplication of records

2. Data clean-up. In the process identify Wikidata/FactGrid Q# for Geography, Biography, Institutions, Libraries, to the extent possible.

3. Definition of the mapping between PhiloBiblon fields and "dataclips" (controlled vocabulary) and FactGrid P# and Q# (e.g., PhiloBiblon date of birth = FactGrid P77). (A sketch of steps 3 and 4 follows below.)

4. Identification of PhiloBiblon entities with FactGrid Q entities (e.g., Fernando III, el Santo, king of Castile and Leon = BETA bioid 1110 = BITAGAP bioid 2054 = BITECA bioid 6515 = FactGrid Q254531).

5. Import PhiloBiblon CSV records from Windows PhiloBiblon into FactGrid taking account of points 3 and 4.

6. Enrich records with data from external sources, e.g. VIAF.

Some of these steps can be carried out in parallel, i.e., Steps 1-4
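A minimal sketch in Python of the mapping tables behind steps 3 and 4, using the two examples given in the list above. P77 and Q254531 come from this page; the place-of-birth property and the date value are placeholders:

FIELD_MAP = {
    "date_of_birth": "P77",           # FactGrid date of birth (step 3 example)
    "place_of_birth": "P<unknown>",   # placeholder until the FactGrid Property is identified
}

ID_MAP = {
    "BETA bioid 1110": "Q254531",     # Fernando III, el Santo (step 4 example)
    "BITAGAP bioid 2054": "Q254531",  # same person in the second bibliography
    "BITECA bioid 6515": "Q254531",   # same person in the third bibliography
}

def to_statement(pb_id, field, value):
    # Translate one PhiloBiblon cell into a (Q, P, value) triple for import.
    return (ID_MAP[pb_id], FIELD_MAP[field], value)

print(to_statement("BETA bioid 1110", "date_of_birth", "<date in Wikibase time format>"))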

The order in which entities should be imported into FactGrid needs to be decided. It seems clear that Geography should go first, because its data are used in Biography, Institutions, Libraries, Ms/Ed, and Uniform Title.

Institutions and Libraries should come next because the number of records to be imported is relatively small, and many of them will already have Wikidata Q# if not FactGrid Q#.

Biography (Persons) would come next because its records are needed for Analytic, Copies, Ms/Ed, and Uniform Title. There are some 40,000 Biography records in total, the vast majority of which will have neither Wikidata nor FactGrid records.

Uniform Title, Ms/Ed, Copies, and Analytic should follow, in that order.

This is in fact the same order set forth in the original Work Plan in the NEH proposal. It is worth noting that we are already starting on work Geography that in the Work Plan is scheduled for October.

--Charles Faulhaber (talk) 23:56, 13 June 2021 (CEST) (for Josep Maria Formentí)

Questions for webex sessions

1. How do you keep together separate statements that form a data unit? For example, BETA bioid 1007 shows the following information for the date and place of birth of king Alfonso X:

Toledo 1221-11-23 (DB~e)

Madrid 1221-11-26 (Alvar 2010)

These represent two different opinions.

The source for the first one is the Diccionario Biográfico electrónico (DB~e). The source for the second is a 2010 article by Carlos Alvar.

We do not want to have the two place-of-birth statements together followed by the two date-of-birth statements, since there is then no way to associate the 1221-11-23 date with Toledo and the 1221-11-26 date with Madrid.

--Charles Faulhaber (talk) 06:58, 26 July 2021 (CEST)

Wikibase allows any number of statements on the same question. Four dates of birth next to each other are no problem at all. We give them with their respective different sources and different notes on why this should be the date. If we can rule out a particular date we still keep it and downvote it with a note, so that there is no falling back into an error. If we know that all dates are right (e.g., different successive population counts) we can prefer the latest date so that only it appears in searches. --Olaf Simons (talk) 08:19, 26 July 2021 (CEST)
For discussion purposes, a very short "Alfonso X" in FactGrid without downvotes or preferences, at your service: https://database.factgrid.de/wiki/Item:Q266835 You can have a look at the corresponding Wikidata sheet (interlink at the bottom of the page) to see how they solve the problem there (it's the same software).--Martin Gollasch (talk) 09:41, 26 July 2021 (CEST)
The problem is not the multiple dates of birth but rather associating a specific date of birth with a specific place of birth, rather than adding all of the date-of-birth statements followed by all the place-of-birth statements. What you show in the linked FactGrid Q266835 is precisely what we don't want: it breaks the association between date and place, leaving it up to the reader to guess that the first date is associated with the first place and the second with the second. This kind of association is fundamental to the structure of PhiloBiblon--Charles Faulhaber (talk) 06:47, 30 July 2021 (CEST)
Well, you could link both statements, shifting either of the two linked statements into the qualifier (and you can have as many qualifiers as you want). But you will actually like the option to separate these statements, since you will not always have the linked statement. --Olaf Simons (talk) 07:17, 30 July 2021 (CEST)
One thing is database content; the other thing is what a reader/user sees in the browser later on. I made some changes (gave preference to Carlos Alvar 2010) and we will see, tomorrow at the latest, how the database content is shown in the beta version of the FactGrid Viewer as an example. Right now there are two lines with equal values. You find the available browsers/viewers on the left of your screen. I guess this viewer does not show the preference... Actually it should be no problem to find a viewer which is suitable for you. The actual question is: do you have a database problem or a viewer problem?--Martin Gollasch (talk) 10:11, 30 July 2021 (CEST)
I regret the delay in responding. Martin, you have solved the problem in my opinion. This is not simply a browser issue; it is in fact a database problem: how to maintain the relationship among the various elements of a data structure, in this case place of birth / date of birth / source. Here there are only two elements in the data structure, but we have cases in which six or eight related elements need to be kept together, for example the former owner of a manuscript, the date and place he acquired it, the price he paid, and the source of the information. --Charles Faulhaber (talk) 07:04, 29 August 2021 (CEST)
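To make Olaf's qualifier suggestion concrete, here is a hedged sketch in Python that emits QuickStatements-style lines in which each date of birth carries its place as a qualifier, so the Toledo/11-23 and Madrid/11-26 pairings from the example above stay intact. P77 (date of birth) and Q266835 (the test Alfonso X item) appear earlier on this page; the place-of-birth qualifier property and the place Q-numbers are placeholders:

claims = [
    # (date of birth, place of birth, source), per the Alfonso X example above
    ("+1221-11-23T00:00:00Z/11", "Q<Toledo>", "DB~e"),
    ("+1221-11-26T00:00:00Z/11", "Q<Madrid>", "Alvar 2010"),
]
for date, place, source in claims:
    # Columns: item, property, value, then qualifier property/value pairs.
    line = "\t".join(["Q266835", "P77", date, "P<place-of-birth>", place])
    print(line)  # in a real batch, add the source as an S-prefixed reference column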
Charles: The nice thing about starting a project in open data is that many things can be decided as collaborative work is in progress... In a project such as yours you might have a look at how Germania Sacra is organizing their bishoprics, and if you find a better way, others might even learn from you. Same thing with manuscripts of all kinds. I personally deal with the source type Album Amicorum / Liber amicitiae to get mobility profiles and 18th-century social group profiles. They can be described as a bunch of manuscripts. There is a tendency among German libraries to describe them single sheet by single sheet; in doing so, you lose all the information that results from their connection in the album. That certainly needs discussion later on, since electronic data can be reorganized more easily... So you have to keep your data reorganizable. In the field of alba amicorum we mostly have the first/original holder of the book, but it certainly could be of interest to see all the owners en route to the present archive... So I am curious to see how others solve common problems. I think you should consider creating model structures of typical data sets in FactGrid soon. This will make it easier for other users here to enter a discussion with you and your team members.--Martin Gollasch (talk) 11:15, 29 August 2021 (CEST)

That is precisely what we want to do. Olaf has suggested that we start with place names, because they are needed to establish relationships with people and with organizations. However, we are still struggling with the correlation between our ontology and FactGrid's Properties. We do not want to create a Property that duplicates one that already exists in FactGrid. We've been looking at Wikidata to find models for the kinds of entities that we need and the Properties they use, e.g., WikiProject Books https://www.wikidata.org/wiki/Wikidata:WikiProject_Books. We understand that Wikidata Properties and FactGrid Properties have different P#s, but looking at the former will help us with the latter. --Charles Faulhaber (talk) 08:01, 30 August 2021 (CEST)

Sorry Charles, that's not what I meant. My suggestion is to take one of your old manuscripts, perhaps not the easiest one, and describe it in FactGrid with Items for all the persons, places and institutions you need for a proper, state-of-the-art description. Sort of a case study... The mass import of places first is certainly a good decision, but I would like to propose that you create one or two more or less complete models of your intended datasets parallel to the planned mass imports of data; the mass import is totally independent from my suggestion. In the German Wikisource, Pope Joan is often referred to as a dataset which includes nearly every problem you might have in modeling...--Martin Gollasch (talk) 09:25, 30 August 2021 (CEST)