
[don't merge] Switch DOI to DOI.org #1135

Open · wants to merge 13 commits into master · 4 participants
adam3smith (Collaborator) commented Sep 4, 2016

@zuphilip -- this obviously needs cleanup, but I wanted to get your
thoughts on it. It takes advantage of the CSL JSON now available
via content negotiation on doi.org (and the functionality you added to
ZU.doGet to use it). The main advantages are:

  1. Fewer API requests
  2. Works in Chrome connector for items with DataCite and CrossRef DOIs
  3. Will work with additional services as they become available

Open questions:

  1. What's a good name for the new translator?
  2. Should we remove the old DataCite and CrossRef ones, or leave them in
     as a fallback at least initially?
  3. Any thoughts on how to test this for data quality? I was going to
     manually go through all DOIs that are in the CrossRef translator. Any
     other ideas?

I'd also welcome any other conceptual issues you can think of (as I say,
don't worry about code quality for now).

[don't merge] Switch DOI to DOI.org
zuphilip (Collaborator) commented Sep 4, 2016

Cool! I also wanted to look at this anyway at some point...

At doi.org, the request is forwarded to the responsible registration agency with the same Accept header, and only if that agency can handle the header will we receive CSL JSON back. Otherwise we might be redirected to some landing page in HTML. Besides CrossRef and DataCite, mEDRA should also support content negotiation with the CSL JSON header; see the table at http://www.crosscite.org/cn/.

  1. What's a good name for the new translator?

I like DOI Content Negotiation. 👍

It is a search translator and can thereby also be used for the DOI search from Zotero itself, i.e. the "magic" wand. I don't know if this affects the discussion here.

  2. Should we remove the old DataCite and CrossRef ones, or leave them in
    as a fallback at least initially?

If everything works out fine, then we should not (strictly) need the single search translators anymore. However, this feature is only available from Zotero v4.0.29.11, so anyone running an older version would be negatively affected by a deletion of the two translators. Maybe we can simply increase the priority numbers of those?

  3. Any thoughts on how to test this for data quality? I was going to
    manually go through all DOIs that are in the CrossRef translator. Any
    other ideas?

You can create a gist with the examples and then test the web translator (which will implicitly test the search translator), e.g. https://gist.github.com/zuphilip/9e5101219f453964eb7e . It would also be good to have more information when the translation is failing for the users, i.e. the things we see at https://repo.zotero.org/errors.


a) Did you already test this translator with the Chrome connector?
b) Do you know of any other agencies providing metadata in some format over content negotiation, or planning to do so? Or can we convince them to?
c) I don't know exactly what the consequences are performance-wise. I guess performance should be better for all working agencies, but how about DOIs from other agencies?
d) Maybe you can try to extend the function filterQuery(items, checkOnly) in a similar way to getSearchResults and thereby also use it in detectWeb (there seems to be quite some overlap of lines).
e) It would be nice to indicate which agency actually provided the metadata, but I don't know if we have this information anywhere...
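To make (d) concrete, here is a minimal sketch of what a shared filterQuery(items, checkOnly) helper could look like. The function name comes from the comment above; the DOI regex and return shapes are illustrative assumptions, not the actual translator code:

```javascript
// Hypothetical sketch: normalize the search input (a string, an object with
// a DOI property, or an array of either) and extract DOI-looking strings.
// In detect mode (checkOnly) it returns a boolean; otherwise cleaned items.
function filterQuery(items, checkOnly) {
	if (!items) return checkOnly ? false : [];
	if (typeof items === "string" || !items.length) items = [items];
	var cleaned = [];
	for (var i = 0; i < items.length; i++) {
		var item = items[i];
		var doi = (typeof item === "string") ? item : item.DOI;
		// very loose DOI check: "10." prefix, registrant code, then a suffix
		var m = doi && doi.match(/\b10\.[0-9]{4,}\/\S+\b/);
		if (m) {
			if (checkOnly) return true; // detect mode: one hit is enough
			cleaned.push({ DOI: m[0] });
		}
	}
	return checkOnly ? false : cleaned;
}
```

Both detectSearch and doSearch (and, as suggested, detectWeb) could then call this one helper instead of duplicating the validation logic.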

Let me know if I can help you further.

Some cleanup
This now works decently well
adam3smith (Collaborator, author) commented Sep 8, 2016

To your questions:
a) No -- Standalone is actually not on 4.0.29.11 yet, so I would have to force Firefox into server mode and might as well wait.
b) Just the three listed on http://www.crosscite.org/cn/
c) The only agency not covered explicitly for which we currently have a translator is Airiti.
d) OK, will do.
e) CrossRef actually puts this in the data (it says source: CrossRef in the JSON).

More generally, in doing some quality testing, I'm finding that metadata for books and book chapters in CrossRef is of much lower quality in the CSL JSON than in their UnixRef format. I have contacted them about this and will see what they say. If improvements are not in the cards soon, we can still handle all of this in a single translator using content negotiation: we can specify an order of preferred formats in the Accept header and can just put UnixRef and DataCite XML in there, then check which one we get (which should be easy) and run the respective parsing code.

item.genre = item['container-title'];
}
if (item.type != "article-journal" && item.type != "paper-conference") {
	item.note = "DOI: " + item.DOI;

zuphilip (Collaborator) commented Jun 4, 2017:

This should actually go into the extra field.
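A minimal sketch of what that suggestion could look like, assuming the CSL JSON field names from the snippet above (the function name and the append-to-extra convention are illustrative, not the translator's actual code):

```javascript
// Illustrative sketch of the review suggestion: for item types without a
// native DOI field, store the DOI in "extra" instead of "note", appending
// if "extra" already has content.
function addDOIToExtra(item) {
	if (item.type != "article-journal" && item.type != "paper-conference") {
		item.extra = (item.extra ? item.extra + "\n" : "") + "DOI: " + item.DOI;
	}
	return item;
}
```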

zuphilip (Collaborator) commented Jan 1, 2018

More generally, in doing some quality testing, I'm finding that metadata for books and book chapters in CrossRef are of much lower quality in the CSL JSON than in their UnixRef format. I have contacted them about this and will see what they say.

@adam3smith Any update on this matter?

Moreover, testing in 5.0 would be needed now...

@adam3smith And just checking: Is this change still what we want?

adam3smith (Collaborator, author) commented Jan 1, 2018

I definitely still want to do this, yes, and I'd be surprised if it didn't work in 5.0.
The biggest issue is that, last I checked, we still had the metadata quality issue with CrossRef. We can work around this and still use DOI content negotiation (you can sequence preferred formats and we could distinguish them easily), but obviously it'd be nice not to have to worry about the UnixRef parser at all anymore. I'll look at this again.

dstillman (Member) commented Feb 26, 2018

@adam3smith: Somewhat tangential, but just to understand this better, for the Zenodo example you gave in the forums, 10.5281/zenodo.889811, it seems the data from content negotiation is different from the metadata available on zenodo.org, even in the same format:

curl -LH "Accept: application/vnd.citationstyles.csl+json" https://doi.org/10.5281/zenodo.889811

{
  "type": "report",
  "id": "https://doi.org/10.5281/zenodo.889811",
  "categories": "qualitative methods",
  "author": [
    {
      "family": "Buthe",
      "given": "Tim"
    },
    {
      "family": "Jacobs",
      "given": "Alan"
    }
  ],
  "issued": {
    "date-parts": [
      [
        2015,
        9,
        30
      ]
    ]
  },
  "abstract": "In this issue, we are delighted to present a symposium, inspired by a panel at the 2015 APSA Annual Meeting, that explores a range of innovative linkages between the ventures of interpretation and causal inference. In remarkably diverse ways, the essays in this collection propose approaches to empirical research that combine tools and logics of causal inferences with techniques and understandings typically associated with interpretive inquiry. Our contributors make the case, respectively, for joining cross-case comparison to ethnography, interpretation to process tracing, ethnographic fieldwork to social network analysis, and interpretive discourse analysis to the quantitative measurement of identity. As Guest Editor of the symposium, Edward Schatz both introduces the pieces and offers a concluding synthesis that unpacks a set of conceptual ambiguities and tensions with which positivist-interpretivist bridge-building efforts must grapple.",
  "DOI": "10.5281/zenodo.889811",
  "page": "-",
  "publisher": "Zenodo",
  "title": "Letter From The Editors"
}

vs.

https://zenodo.org/record/889811/export/csl

{
  "DOI": "10.5281/zenodo.889811", 
  "abstract": "<p>In this issue, we are delighted to present a symposium, inspired by a panel at the 2015 APSA Annual Meeting, that explores a range of innovative linkages between the ventures of interpretation and causal inference. In remarkably diverse ways, the essays in this collection propose approaches to empirical research that combine tools and logics of causal inferences with techniques and understandings typically associated with interpretive inquiry. Our contributors make the case, respectively, for joining cross-case comparison to ethnography, interpretation to process tracing, ethnographic fieldwork to social network analysis, and interpretive discourse analysis to the quantitative measurement of identity. As Guest Editor of the symposium, Edward Schatz both introduces the pieces and offers a concluding synthesis that unpacks a set of conceptual ambiguities and tensions with which positivist-interpretivist bridge-building efforts must grapple.\u00a0</p>", 
  "author": [
    {
      "family": "Buthe, Tim"
    }, 
    {
      "family": "Jacobs, Alan"
    }
  ], 
  "container_title": "Qualitative & Multi-Method Research", 
  "id": "889811", 
  "issue": "2", 
  "issued": {
    "date-parts": [
      [
        2015, 
        9, 
        30
      ]
    ]
  }, 
  "page": "2", 
  "publisher": "Zenodo", 
  "title": "Letter from the editors", 
  "type": "article-journal", 
  "volume": "13"
}

Seems like the former is preprint data that was never updated? Is that common?

I was looking because I was curious how complete CSL-JSON data is compared to other formats. In this case that's not relevant, since they're both CSL-JSON, but are there fields we might care about that don't exist in CSL-JSON? (I didn't figure out any other formats that worked via CN for that DOI anyway, so perhaps irrelevant…)

zuphilip (Collaborator) commented Feb 26, 2018

Seems like the former is preprint data that was never updated?

I don't think that's what you're seeing here. I guess the difference comes from the DataCite metadata schema, which lacks some details that are common for journal articles, e.g. journal title, volume, issue, pages.

If you start with less information to create CSL-JSON at DataCite, it is understandable that this results in a different CSL-JSON than when you start with more information at Zenodo.

adam3smith (Collaborator, author) commented Feb 26, 2018

Yes, @zuphilip has that exactly right -- this is caused by back-and-forth translation to/from an unsuitable schema. I've been talking to DataCite and they're very likely going to update their schema to better accommodate articles in the future.

adam3smith (Collaborator, author) commented Feb 26, 2018

You can see the full entry in the DataCite catalog using curl -LH "Accept: application/vnd.datacite.datacite+json" https://doi.org/10.5281/zenodo.889811:

{
  "id": "https://doi.org/10.5281/zenodo.889811",
  "doi": "10.5281/zenodo.889811",
  "creator": [
    {
      "type": "Person",
      "name": "Tim Buthe",
      "givenName": "Tim",
      "familyName": "Buthe"
    },
    {
      "type": "Person",
      "name": "Alan Jacobs",
      "givenName": "Alan",
      "familyName": "Jacobs"
    }
  ],
  "title": "Letter From The Editors",
  "publisher": "Zenodo",
  "resource_type_general": "Text",
  "resource_type": "Journal article",
  "subject": [
    [
      "qualitative methods"
    ]
  ],
  "date_published": "2015-09-30",
  "date_registered": "2018-01-03T19:25:14.000Z",
  "date_updated": "2018-01-10T16:45:14.000Z",
  "publication_year": 2015,
  "alternate_identifier": {
    "type": "url",
    "name": "https://zenodo.org/record/10.5281/zenodo.889811"
  },
  "rights": [
    {
      "id": "https://creativecommons.org/licenses/by-nc-nd/4.0",
      "name": "Creative Commons Attribution-NonCommercial-NoDerivatives"
    },
    {
      "name": "Open Access"
    }
  ],
  "description": {
    "type": "Abstract",
    "text": "In this issue, we are delighted to present a symposium, inspired by a panel at the 2015 APSA Annual Meeting, that explores a range of innovative linkages between the ventures of interpretation and causal inference. In remarkably diverse ways, the essays in this collection propose approaches to empirical research that combine tools and logics of causal inferences with techniques and understandings typically associated with interpretive inquiry. Our contributors make the case, respectively, for joining cross-case comparison to ethnography, interpretation to process tracing, ethnographic fieldwork to social network analysis, and interpretive discourse analysis to the quantitative measurement of identity. As Guest Editor of the symposium, Edward Schatz both introduces the pieces and offers a concluding synthesis that unpacks a set of conceptual ambiguities and tensions with which positivist-interpretivist bridge-building efforts must grapple."
  },
  "schemaVersion": "http://datacite.org/schema/kernel-3",
  "provider_id": "CERN",
  "client_id": "CERN.ZENODO",
  "provider": "DataCite"
}

The obvious things we're not getting from CSL JSON are subject (tags) and license (rights).

adam3smith referenced this pull request Apr 12, 2018: CrossRef REST #1618 (merged)

adam3smith (Collaborator, author) commented Jan 24, 2019

Reviewing with @zuphilip -- we're currently thinking that to get the best result, we'll want to

  1. Specify specific formats (CrossRef, DataCite), both of which are more expressive (and complete) than CSL JSON. We have two CrossRef UnixRef importers (one in the REST translator, the other in the standard CrossRef translator; we'll likely take the regular CrossRef one because that's for the XML). We'd need to write one for DataCite JSON.

  2. Then fall back to CSL JSON.

This will currently work for mEDRA, Japan Link Center, and Kistic. crosscite.org suggests it should also work for ISTIC, but that's currently not the case.

An additional idea for the longer term would be to check whether the EU Publications Office (which doesn't support CN) has a very limited set of prefixes. If that's the case, we could follow the URL and import from the landing page, which has OK metadata. Not implementing this isn't a showstopper -- we currently fail on OP DOIs, too.

zuphilip (Collaborator) commented Jan 24, 2019

For the call URL we can use something like

curl -LH "Accept: application/vnd.crossref.unixref+xml, application/vnd.datacite.datacite+json, application/vnd.citationstyles.csl+json" https://doi.org/10.5281/zenodo.11813

and for differentiation of the different JSON formats, DataCite includes

 "schemaVersion": "http://datacite.org/schema/kernel-4",

which CSL JSON does not.
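Putting the two observations together, dispatching on the response could look roughly like this (a sketch only; the function name is hypothetical and the checks follow the heuristics stated above: UnixRef comes back as XML, DataCite JSON carries a schemaVersion field, anything else is CSL JSON):

```javascript
// Illustrative sketch: decide which parser to run based on which of the
// three requested formats doi.org actually returned.
function detectFormat(body) {
	var text = body.trim();
	if (text.charAt(0) === "<") return "unixref"; // XML => CrossRef UnixRef
	var json = JSON.parse(text);
	// DataCite JSON includes "schemaVersion" (e.g. .../schema/kernel-4)
	if (json.schemaVersion && json.schemaVersion.indexOf("datacite.org/schema") !== -1) {
		return "datacite";
	}
	return "csl"; // fall back to treating it as CSL JSON
}
```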


adam3smith and others added some commits Jan 26, 2019

Actually add license
Delete Datacite & Crossref
zuphilip (Collaborator) left a review comment:

These are some comments about the code to clean it up. Some easier things I suggested directly in my PR adam3smith#11 against your branch. In general this looks good. This PR now depends on #1812 being merged first (or merged into this branch). I have only tested a few cases so far and will try to do some more tests, but they work fine.

Z.debug(item);
if (item.itemType == "report") {
	item.reportType = item.publicationTitle;
}

zuphilip (Collaborator) commented Jan 27, 2019:

Why do we do these fixes here? I would rather suggest doing these fixes in the CSL JSON translator... If possible, these three cases would then differ only by the translator we choose.

"url": "http://dx.doi.org/10.12763/ONA1045",
"DOI": "10.12763/ONA1045",
"date": "1734",
"libraryCatalog": "DataCite"

zuphilip (Collaborator) commented Jan 27, 2019:

Update the test here.

You do this manually, right?

Handle cases where title is empty
E.g. 10.1023/A:1005640604267 or 10.4086/cjtcs.2012.002
zuphilip (Collaborator) commented Jan 28, 2019

Okay, I tested with 140 Crossref DOIs and 13 Datacite DOIs found in My Library. All Datacite DOIs work, but two Crossref DOIs failed because the title was empty (fixed in the PR), and two other DOIs failed where I don't know exactly why: 10.1002/(SICI)1097-461X(1999)72:4<287::AID-QUA11>3.0.CO;2-Y and 10.1002/(SICI)1099-1360(199711)6:6<320::AID-MCDA164>3.0.CO;2-2. Overall this PR now seems quite stable. For testing the web translator one can use the page https://gist.github.com/zuphilip/9e5101219f453964eb7e, but this will be extended anyway in #1799 by @mrtcode.

Merge pull request #11 from zuphilip/DOI
Update and simplify
mfenner (Contributor) commented Jan 31, 2019

@zuphilip SICIs (your failing examples, e.g. 10.1002/(SICI)1097-461X(1999)72:4<287::AID-QUA11>3.0.CO;2-Y) are a pain to work with and break many things. Luckily they're a child of the 1990s and no new ones are generated (Crossref stopped supporting them at some point, I think).

Address review comments
Update tests
adam3smith (Collaborator, author) commented Feb 2, 2019

This is working nicely for me now & a clear improvement over current behavior. It's a major change, though, so would be good to get another set of eyes on this before merging.
@dstillman @mrtcode would one of you be willing to take a look?

if (typeof item[field] != 'string') continue;
// check for control characters that should never be in strings from CrossRef
if (/[\u007F-\u009F]/.test(item[field])) {
	item[field] = decodeURIComponent(escape(item[field]));

zuphilip (Collaborator) commented Feb 3, 2019:

Can we do the same here without the escape function, which one shouldn't use anymore?

adam3smith (Author, Collaborator) commented Feb 4, 2019:

Not easily. Neither JSON.parse (SO's top answer) nor encodeURIComponent (which is almost the same as escape) works exactly the same way (and they don't work for the case referenced here), so we'd have to build a function for this specific purpose. Since escape is going nowhere -- it's technically not even deprecated -- I'd be inclined to just leave it.

zuphilip (Collaborator) commented Feb 4, 2019:

I played around a little and it seems that the following code might work:

var escaped = item[field].replace(/[^0-9A-Za-z ]/g, function (c) {
	// pad to two hex digits so decodeURIComponent always sees a valid %XX escape
	var hex = c.charCodeAt(0).toString(16);
	return "%" + (hex.length < 2 ? "0" + hex : hex);
});
item[field] = decodeURIComponent(escaped);

However, I don't understand what we are doing here conceptually. Escaping and then decoding again should more or less give back the same string we had at the beginning...
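For what it's worth, the round trip is only a no-op for well-formed strings. When UTF-8 bytes have been mis-decoded as Latin-1 (which is what the control-character check above detects), each JavaScript character holds one raw byte; escape() turns those char codes back into %XX byte escapes, and decodeURIComponent() then decodes that byte sequence as proper UTF-8. A minimal illustration (the sample string is invented):

```javascript
// "Müller" encoded as UTF-8 (4D C3 BC 6C 6C 65 72) but read as Latin-1
// yields the 7-character mojibake string "MÃ¼ller".
var mojibake = "M\u00C3\u00BCller";
var fixed = decodeURIComponent(escape(mojibake));
// escape(mojibake) === "M%C3%BCller", which decodeURIComponent reads as
// UTF-8, so fixed is the intended 6-character string "Müller".
```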

adam3smith (Collaborator, author) commented Feb 4, 2019

Note to self -- we still need to adjust priorities; no need to keep content-negotiation at 80

dstillman (Member) commented Feb 4, 2019

(This should be "DOI Content Negotiation.js" (no "s"), right?)

adam3smith (Collaborator, author) commented Mar 27, 2019
