Improving access and delivery of academic content - a survey of current & emerging trends
Update Nov 2021 - A relook at GetFTR, Libkey, Ex Libris Quicklinks, and other linking and authentication technologies
While allowing users to gain access to paywalled academic content (aka delivery) is often seen as less sexy than discovery, it is still an important part of the researcher workflow that is worth looking at. In particular, I will argue that in the past few years we have seen a renewed interest in this part of the workflow, and we may start to see some big changes in the way we provide access to academic content in the near future.
Note: The OA discovery and delivery front has changed a lot since 2017, with Unpaywall being a big part of the story, but for this blog post I will focus on delivery aspects of paywalled content.
1.0 Access and delivery - an age old problem
1.1 RA21, Seamless Access and getFTR
1.2 Campus Activated Subscriber Access (CASA)
1.3 Browser extensions/"Access Brokers"
1.4 Content syndication partnership between Springer Nature and ResearchGate (new)
1.5 Is the sun slowly setting on library link resolvers?
1.6 The Sci-hub effect?
1.7 Privacy implications
For more posts on delivery on this blog.
Disclosure: I am currently on the advisory committee of the Coalition for Seamless Access; however, everything in this blog post is my own personal opinion and is not authorised, nor necessarily endorsed, by the organization.
1.0 Access and delivery - an age old problem
While access and delivery of academic content can be considered the less sexy cousin of academic discovery, a case could be made that while the difficulty of discovery has receded somewhat, users still face delivery workflows that are inconsistent and full of friction, particularly if they start their research off-campus and/or do not start off library pages.
This has in fact been the theme of my blog from the beginning, over 10 years ago, when I started to blog about proxy bookmarklets and browser toolbars that made it easier for users to access library resources from their browsers.

Some blog posts on the delivery issue
But if this problem is at least a decade old, why have we seen an uptick in interest recently in solving it? As I will argue later, some of this is due to the Sci-hub effect and publishers' growing interest in capturing the research workflow and exploiting analytical data, to prevent "leakage".
But what problems in delivery do we have to solve?
I personally see delivering access to academic content as relating to two issues.
Firstly, when users land on a paywalled piece of academic content, they are often stuck trying to figure out if they have the entitlements to allow access to that particular piece of academic content. Part of the solution here involves designing systems to allow seamless authentication for the user so that the content provider can allow appropriate users to access the content.
Secondly, there is what we have traditionally called the "appropriate copy problem" - where the user is not limited to checking if they have access to the content on the site they are on, but can also be redirected to other options. For example, a user landing on ScienceDirect should be able to authenticate and be redirected to other useful options like full text copies on aggregators like Ebsco, an open access copy, or simply to document delivery services.
Related to these two issues is the effectiveness of the links provided; access links (particularly those based on the OpenURL protocol) have often been unstable, so a solution to improve access/delivery of content should ideally address these issues as well.
Resources: If you are totally unfamiliar with the standard access management methods libraries use, e.g. concepts like
IP recognition
Proxies and VPNs
SAML and Federated authentication
The free ebook - Access to Online Resources, A Guide for the Modern Librarian - might be helpful in giving a simple-to-read and relatively short tour of the concepts. That said, the author is affiliated with OpenAthens (which develops and supports identity and access management software). Alternatively, read my "Understanding Federated identity, RA21 and other authentication methods".
1.1 RA21, Seamless Access and GetFTR
Arguably a lot of the poor user experience with delivery can be traced to the dominance of IP recognition for academic resources and the use of EZproxy for off-campus access.

User with the right IP (on campus) is allowed access
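The EZproxy approach mentioned above works by rewriting publisher URLs so that traffic flows through a campus-operated proxy that authenticates the user first. A minimal sketch of this URL rewriting - the proxy hostname here is invented for illustration; your institution would have its own:

```python
# Sketch of the classic EZproxy "login?url=" prefix pattern. The proxy
# hostname is a made-up example, not a real service.
PROXY_PREFIX = "https://ezproxy.example.edu/login?url="

def proxify(url: str) -> str:
    """Wrap a publisher URL so off-campus users authenticate via the proxy first."""
    return PROXY_PREFIX + url

# Once authenticated, the proxy fetches the publisher page from a campus IP,
# so the publisher sees an on-campus request and grants access.
```

This is essentially what the proxy bookmarklets mentioned earlier do: rewrite the current page's URL with the institution's proxy prefix.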
A possible alternative and solution lies in the use of federated authentication, but employment of this technology for library online resources has never been done consistently across institutions and content providers, leading to very complicated and difficult to use interfaces.
Below are some interface screens you see when trying to do federated authentication on ScienceDirect

So many ways of signing in. Do you click on OpenAthens?

Complicated labels and forms you see if you try using federated authentication methods
In 2017, the RA21 initiative started off exploring ways to change this; it slowly gathered steam and culminated in the launch of its successor organization - the Coalition for Seamless Access (backed by organizations like NISO and STM) - based on the recommended practices of RA21.
So what is Seamless Access? It is
"a service designed to help foster a more streamlined online access experience when using scholarly collaboration tools, information resources, and shared research infrastructure."
The key component of the service is that it uses federated authentication (via SAML), which allows users to access academic content in as seamless a manner as possible.
Below is an excellent YouTube video released by https://seamlessaccess.org on the concept of federated authentication.
Platforms (service providers) that implement this have three "flavours" to choose from - "Limited", "Standard" and "Advanced" - which scale with implementation complexity and control, but I will not go into details.
Part of the work of the organization is to pilot and work with various stakeholders, from service providers (e.g. publishers) to identity providers (e.g. institutions, libraries) and researchers, to set standards for consistent user workflows and intuitive interface designs.
The idea here is regardless of which content provider you are on, the user experience to sign in will be the same.

Seamless access login buttons on Springer-Nature journal page
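A large part of what makes the consistent "Access through your institution" button seamless is that the service remembers which identity provider the user picked, so they are not forced through the long "where are you from" institution picker on every site. In the real service this persistence lives in the browser, but the idea can be modelled with a toy sketch (all names here are invented):

```python
# Toy model of remembering a user's chosen identity provider (IdP) so a
# later "Access through your institution" click can skip the full picker.
# In reality this state is kept in browser storage, not a server-side dict.
remembered_idp = {}  # browser_id -> IdP entity ID

def choose_idp(browser_id: str, idp: str) -> None:
    """Record the IdP the user selected the first time they sign in."""
    remembered_idp[browser_id] = idp

def login_button_target(browser_id: str, default_wayf: str = "wayf") -> str:
    """Return the remembered IdP, or fall back to the 'where are you from' picker."""
    return remembered_idp.get(browser_id, default_wayf)
```

The design choice worth noting is that the remembered choice is per-browser, not per-publisher, which is what gives the same one-click experience across participating content providers.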
They also work on "attribute release policies", which will determine how much service providers (publishers, content providers) will know about the user when they sign-in.
Currently uptake of Seamless Access isn't widespread yet, though you can see the Seamless Access login buttons (usually labelled "Access through your institution") starting to appear in journals on SpringerNature and Elsevier.
GetFTR - an application of Seamless Access - solving the appropriate copy problem?
While the Seamless Access service helps with access to content on the site you are on via federated authentication (for content providers that have implemented the service), it does not directly handle the appropriate copy problem, where the user is on another platform like a discovery service and the service needs to figure out where to send the user.
My understanding of #GetFTR is that it attempts to solve the appropriate copy problem, which library systems addressed with OpenURL standard in mid-2000s. It seems a ‘metasearch’-like approach using APIs to determine access opportunities
— Todd Carpenter (@TAC_NISO) December 4, 2019
Enter the new GetFTR service launched in Dec 2019, which is backed by five publishers - American Chemical Society, Elsevier, Springer Nature, Taylor and Francis Group and Wiley.
An application of Seamless Access (but independent of the project), it builds on SA work so that discovery services can automatically detect user entitlements (even when not on publisher sites) and provide "enhanced links" to rapidly access research on publisher websites if the user has access, hence providing another solution to the appropriate copy problem.
(There were also initial concerns that the initiative seemed to be a publisher-only one which would cut out aggregator copies, but GetFTR has announced they are "committed" to supporting them as well.)
Here's how it is described to benefit researchers on the GetFTR site
"When viewing search results on a discovery tool or scholarly collaboration platform, researchers can easily tell which content their institution has made available to them via the GetFTR indicator. They can then follow the enhanced links provided by GetFTR to rapidly access research on publisher websites."
From my own personal librarian viewpoint, this looks very much like an attempt to improve on the current link resolvers (which are partly based on OpenURL and other technologies). More interestingly, it is suggested:
For users who do not have access based upon their institutional affiliation, participating publishers can provide access to an alternative version of the research, which will go beyond the abstract, enabling the user to better understand the nature of the article (e.g. a preprint).
At first glance this seems very generous of publishers, to provide access to alternative versions, but as we shall see later publishers might actually have some incentive to consider doing this to plug leakage.
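Putting the pieces together, the entitlement-check flow GetFTR describes can be sketched roughly as follows. This is NOT the real GetFTR API - endpoint, field names and the lookup table below are all invented for illustration; the real service answers the question "does this institution's user have access to this DOI?" for the discovery platform:

```python
# Illustrative sketch of a GetFTR-style entitlement decision, not the real API.
# The discovery service asks about a (DOI, institution) pair and gets back an
# "enhanced link" if entitled, or an alternative version (e.g. preprint) if not.
ENTITLEMENTS = {
    # (doi, institution_id) -> access the publisher would grant (invented data)
    ("10.1000/example.1", "inst-01"): "full_text",
}

def check_access(doi: str, institution_id: str) -> dict:
    """Return a link decision for a DOI and institution, per the flow described."""
    if ENTITLEMENTS.get((doi, institution_id)) == "full_text":
        return {"entitled": True, "link": f"https://doi.org/{doi}"}
    # Per GetFTR's description, non-entitled users may be offered an
    # alternative version that goes beyond the abstract, such as a preprint.
    return {"entitled": False, "link": None, "alternative": "preprint"}
```

The key contrast with a traditional link resolver is that the entitlement lookup happens via the publisher's own knowledge of holdings, rather than the library's knowledge base.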
Some discovery services and platforms piloting it include Mendeley, ReadCube Papers, and Dimensions.
1.2 Campus Activated Subscriber Access (CASA)
Another parallel development to RA21/Seamless Access is the implementation of Campus Activated Subscriber Access (CASA) in Google Scholar, which also helps users with more streamlined access when the user is off-campus.
Below shows roughly how it works.

https://insights.uksg.org/articles/10.1629/uksg.360/
Without going into details, roughly how it works is this: when you access Google Scholar links while on-campus, Google Scholar is able to associate your account with the institution, such that when you are off-campus (not within IP range) it can use this stored information to provide access.
As I noted earlier this year in "A belated look at Campus Activated Subscriber Access (CASA) or "off-campus access links" in Google Scholar", this has been quietly implemented with many publishers.
Known as the function "off-campus access links" in Google Scholar, you can turn it on or off (it was default "on" for me) in your Google Scholar settings.

If all goes well, when you are off-campus and access a link via Google Scholar, you will then see a grey button with the label "PDF" generated by CASA on the side of the page.

Platforms that support this include JSTOR, Gale, APA, Springer, APS, Ingenta Connect, Highwire and Project MUSE.
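The CASA flow described above - record the affiliation while the user is on campus, then reuse it off campus - can be modelled with a toy sketch. All names, accounts and the IP check here are invented simplifications; the real implementation uses tokenized links and publisher cooperation:

```python
# Toy model of the CASA idea: an on-campus visit associates the account
# with the institution; that stored association later grants off-campus
# access. The IP-range check is a crude stand-in for the real thing.
affiliations = {}  # account -> institution (invented in-memory store)

def on_campus(ip: str) -> bool:
    """Pretend 10.0.x.x is the campus IP range (illustrative only)."""
    return ip.startswith("10.0.")

def record_visit(account: str, ip: str) -> None:
    """Store the affiliation only when the visit comes from a campus IP."""
    if on_campus(ip):
        affiliations[account] = "inst-01"

def has_offcampus_access(account: str) -> bool:
    """Off campus, access depends on the previously stored association."""
    return account in affiliations
```

In practice the stored association also expires after a period, which is why users need to return to campus (or the campus network) periodically to refresh it.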
1.3 Browser extensions/"Access Brokers"
Another delivery trend that has picked up steam in the past few years is the rise of browser extensions that users install to help with access and delivery.
As mentioned earlier, this blog post will not cover the rise of Open Access discovery/delivery browser extensions like Unpaywall, Open Access Button and CORE Discovery, and will focus on browser extensions that help with access to subscribed content. It is important to note, though, that these extensions tend to provide supplementary access to Open Access or free-to-read content as well, via OA finding APIs like Unpaywall and CORE Discovery.

A number of OA finding/delivery browser extensions
Today we see a variety of such browser extensions, both free ones like Google Scholar Button and Lazy Scholar as well as commercial solutions like Lean Library, Libkey Nomad, Anywhere Access and Kopernio (freemium), that help users more easily access items via their institutional affiliations.

I have reviewed and compared the state of these classes of products fairly recently in 2019 but things are still developing.

Kopernio browser extension - providing access links to PDF
These extensions all have the core functionality of helping users get to academic content (though the mechanism itself may vary), and most also have specific unique functionality less related to delivery.
For example, Lean Library has a unique LibraryAssist functionality so librarians can set custom messages to appear on specific pages, while Libkey Nomad provides overlays on reference sections of Wikipedia.
Kopernio and Libkey Nomad bring users not just to the article abstract page, but are often capable of bringing the user directly to the PDF download link.

Custom message on how to access the Financial Times, advising our users how to get their own individual account.
For the purposes of this blog post, I will not go into the details of each extension, but it is important to state that as of 2020, there have been some acquisitions such that the commercial services are now owned by existing players in the scholarly communication industry. For example, Lean Library was acquired by Sage and Kopernio by Clarivate, both in 2018, while Anywhere Access has always been under Digital Science.
The latest entry to this area is Libkey Nomad by ThirdIron, with its suite of Libkey products, probably best known for the Browzine service.

This class of products is also known by the name "Access Broker", a term coined by RA21 in their RA21 Position Statement on Access Brokers (see also the Scholarly Kitchen article and my response).
Reading the statement, I can fairly say that the RA21 statement is somewhat hostile to access broker browser extensions, and it is unclear if the name will stick to this class of products. More on that later.
Then again, when you consider that the phrase "Big Bang theory" was coined by someone who opposed the theory, it is clearly not unheard of for a name coined by the opposition to stick.
1.4 Content syndication partnership between Springer Nature and ResearchGate - entitlements from user profile
A somewhat different approach to handling delivery is the idea of syndication.
In recent years, ResearchGate has been one of the most successful third party sites in drawing academic scholar and researcher eyeballs. ResearchGate, which was launched in 2008, outcompeted its many rivals in the race to be the social network for academia and is currently king of the hill in this area.
Most people know of ResearchGate's legal woes with major publishers, due to allegations that ResearchGate hosts a lot of illegal content, or rather versions of papers that should not be made freely available.
However in 2018, ResearchGate managed to reach a deal with a few publishers, namely Springer Nature, Cambridge University Press and Thieme over the sharing of their content on the platform.
Notably, Springer Nature started a pilot syndication partnership with ResearchGate. In a nutshell, this involves Springer Nature feeding content to the ResearchGate platform and providing access via ResearchGate users' entitlements.
Here's how this partnership is described
"Selected full-text articles published in Nature, Nature-branded research journals, and Springer-branded journals are directly uploaded (syndicated) at the point of publication via a dedicated content feed to ResearchGate and onward to the relevant authors' ResearchGate profiles."
It is important to note that the "full-text articles" here are VoR (version of record) copies, i.e. final published papers, and not just author accepted manuscripts.

Popup user sees on ResearchGate when they are able to access content via their institutional entitlements
"Registered ResearchGate users who have access to an article included in the Springer Nature-ResearchGate partnership through their institution can read and download the full-text PDF of the article with no additional login or verification required"
It is also noted that currently, during the trial, ResearchGate users who try to access an article that they do not have the entitlements for are allowed to view a preview version of the final article.
On reading this, the question on everyone's mind is: how does ResearchGate know what a user's entitlements are? Is it using GetFTR? Not quite.
When it first launched, there was a lot of speculation on how ResearchGate figured out what the user's entitlements were. Was it merely what they declared in their ResearchGate profile? Could and would users game that?
As I write this, a report analysing the effectiveness of the pilot states
"Authentication through ResearchGate is a multi-layered process. First, the IP address is checked, and if authentication does not occur this way, then the user's affiliation and email address — both of which are fields on ResearchGate profiles — are checked. ResearchGate user profiles contain affiliations that help increase the likelihood that a user's entitlement is recognised."
The report goes on to study the accuracy of such entitlement decisions and found that "over 95% of entitlement decisions made by ResearchGate were confirmed by Springer Nature."
They correctly point out that if authenticating researchers on ResearchGate based on their ResearchGate profiles works out well enough, it would solve a lot of the delivery issue when the user is not on campus/IP-authenticated.
It is important to note that providing access via syndication is quite a different method of delivery compared to the other methods already mentioned above.
This is because while the other methods ensure that the downloading of content still occurs on the content provider's site, syndication feeds the full text onto another platform (ResearchGate) and downloading occurs there.
As librarians in institutions use downloads to determine whether to renew journal packages, syndication of content - where ResearchGate is fed content (PDF, XML) and downloading by ResearchGate users occurs on ResearchGate - can lower the download statistics (typically COUNTER statistics), leading to an apparent decrease in the value of journal titles found and accessed on ResearchGate. (See the later discussion on "leakage".)
The solution to this is to ensure platforms like ResearchGate support Distributed Usage Logging (DUL) - a COUNTER standard that allows downloads on other platforms like ResearchGate to be tracked and combined with the traditional usage captured at publisher sites.
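Conceptually, DUL works by having the hosting platform report each usage event against the article's DOI so the publisher can fold it into its COUNTER statistics. A hedged sketch of what such a usage message might look like - the field names below are illustrative, not the actual DUL message schema:

```python
# Sketch of a usage event a hosting platform (e.g. ResearchGate) could report
# to the publisher under a Distributed-Usage-Logging-style arrangement.
# Field names are invented for illustration, not the real DUL schema.
import json
from datetime import datetime, timezone

def build_usage_event(doi: str, platform: str, event_type: str = "download") -> str:
    """Serialize a single usage event, keyed by DOI so it can be attributed."""
    event = {
        "doi": doi,                      # DOI is the key that ties the event
        "platform": platform,            # where the download actually happened
        "type": event_type,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(event)
```

The important design point is that the DOI, not the URL, identifies the content, which is what lets usage from multiple platforms be combined into one set of statistics.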

1.5 Is the sun slowly setting on library link resolvers?
The major library link resolvers in the market include SFX, 360Link, Alma uResolver and Ebsco's LinkSource/Full Text Finder. The first library link resolvers were primarily OpenURL based, but as time went by they incorporated other ways of linking, e.g. via Crossref DOIs and IEDL, while the Ebsco link resolver and SFX seem to be the most flexible, with multiple ways to link to full text and services, e.g. various custom links and full text Ebsco links.
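For readers unfamiliar with what an OpenURL actually looks like, a minimal sketch of constructing one is shown below. The resolver base URL is a placeholder for your institution's own; the key-value conventions follow the OpenURL 1.0 (Z39.88-2004) style:

```python
# Minimal sketch of building an OpenURL 1.0 style request to a library link
# resolver. The resolver hostname is a placeholder, not a real service.
from urllib.parse import urlencode

RESOLVER_BASE = "https://resolver.example.edu/openurl"

def build_openurl(doi: str, title: str) -> str:
    """Encode citation metadata as an OpenURL query for the link resolver."""
    params = {
        "url_ver": "Z39.88-2004",            # OpenURL 1.0 version tag
        "rft_id": f"info:doi/{doi}",         # identifier of the referent
        "rft.atitle": title,                 # article title as backup metadata
    }
    return RESOLVER_BASE + "?" + urlencode(params)
```

The fragility the post mentions comes partly from this design: when the source platform sends incomplete or malformed metadata in these parameters, the resolver can fail to match the right target, which is why identifier-based linking tends to be more stable.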
GetFTR leading to a complete replacement of the Library Link resolver?
Yet as we have seen quite a few of the access/delivery options already mentioned above have the potential to supplement or even completely replace our existing library link resolvers.
In the category of "completely replace", the GetFTR service seems exactly poised to do so, or at least worries librarians into wondering if this is the aim.
In Why are Librarians Concerned about GetFTR?, Lisa Hinchliffe writes:
The GetFTR website makes it seem like GetFTR is a user-facing service with a distinct interface of its own and this is particularly messaged in the image showing a GetFTR icon on a user screen. This makes it appear that GetFTR replaces the library link resolver rather than working alongside or, potentially, within it. Since the link resolver is a mechanism through which libraries can route users to an appropriate copy for a variety of purposes, it is not surprising that libraries are concerned about this.
From the library point of view, allowing GetFTR to be a complete replacement for the library link resolver is problematic for many reasons. Even leaving aside the issues of control and privacy, the library link resolver provides additional services beyond just providing full text, something that GetFTR is not equipped to do.
Of course, it is still early days for GetFTR, and most of the commentaries I cited were in response to the initial announcement, so there might not be a full understanding of it yet.
For example, Herbert Van de Sompel, who is well known for the development of OpenURL, commented in response to the GetFTR announcement that he was worried about how it might lead to centralization of services and wondered about the actual purpose of GetFTR, while acknowledging he still wants to know more and to "see an architectural diagram".
That could have been integrated in existing OpenURL infrastructure. Hence I wonder about the real goal. Could it be about centrally collecting user interaction data (results, clickstreams) that can be used for recommender services, network metrics, cf eg https://t.co/3ZS772mNLE
— Herbert (@hvdsomp) December 4, 2019
Also, it is currently unclear how platforms that implement GetFTR will interact with existing implementations of the library link resolver; in the short run it is unclear if platforms with GetFTR implementations will suddenly drop support for library link resolvers (if they were there already).
As I write this "Dimensions, Figshare, Symplectic, and Mendeley have started a pilot of the GetFTR service" in April 2020.
Libkey link - a complement?
In the supplement category, the linking technology used by ThirdIron's Libkey suite seems to be a front runner as a supplement to the library link resolver.
ThirdIron first launched the product Browzine, which provided users on mobile or desktop a consistent journal browsing experience.
They have since leveraged their linking technology dubbed Libkey into a suite of services including
Libkey Nomad - already mentioned library browser extension that links users to full text
Libkey Delivery - Embedding Libkey links to popular discovery services including Summon, Primo and EDS
Libkey Link - a "Link Resolver Accelerator", that you include in place of your usual library link resolver on platforms like databases, journal platforms
Libkey.io - More on that later
Let's focus on Libkey Link. ThirdIron wants you to replace the base URL of your library link resolver at various databases and platforms like Scopus, Web of Science and even Google Scholar with your Libkey base URL.
The idea here is that when your user clicks on the "find full text" button on the platform, Libkey Link gets the "first bite of the apple", and only if it can't handle the request for full text does it pass the request down to your usual library link resolver.
An example with Scopus and Libkey Link
In the example below, I have replaced the link resolver baseURL in Scopus with Libkey Link instead of our usual library link resolver.
When the user first clicks on the usual link resolver link in Scopus, they may see something like the below.

Libkey Link menu on clicking the find full text button in Scopus
Like any ThirdIron product, Libkey link is able to use the institutional holdings sent to them to determine that the user has access to the full text.
The displayed page looks pretty much like a normal link resolver page. So why use Libkey Link?
The resulting interface is nice and simple, but with some work you can make your link resolver page equally good, so that alone should not be a reason to do so.
Firstly, Libkey links tend to provide more stable linking than traditional library link resolver links (particularly those using OpenURL).
Secondly, unlike many Library link resolvers that tend to bring you to the article page, Libkey Link usually gives you a direct PDF download link (not supported by all full text providers). If you are the type that always wants to download the PDF (and it seems most people are indeed like this), this feature will not only save you a click but also save you time hunting for the download button on different journal platforms.
Thirdly, it claims to be faster in response.
If you check "automatically remember format choice for 24 hours", you can set it to automatically go to the PDF every time.
Libkey link cannot replace your library link resolver
Now you might be wondering: if this is so great, does it mean we can get rid of the library link resolver? The answer is no; this is not its purpose, and ThirdIron has categorically stated that this will not happen, instead calling Libkey Link a "Link Resolver Accelerator".

This is because, unlike a library link resolver, Libkey Link is quite limited in the types of content it can link to.
Firstly, the item needs to have an identifier (DOI, PMID), and secondly it needs to be something known to ThirdIron (i.e. ThirdIron has the metadata). In practice this means it can usually link only to journal articles with DOIs in fairly well known journal titles. It alone cannot handle ebooks or other types of non-online material at all.
As such if it encounters something it can't handle, or it knows based on holdings that your user does not have access to, it will fall back to your usual library link resolver.
Clearly, one can see that Libkey Link replicates a component of the library link resolver (in fact it reminds me of DOI resolution, though it is capable of handling aggregator copies from Ebsco or Proquest) - an external one that gets the first try, to "accelerate" full text finding of the items it specializes in (items with unique identifiers).
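The "first bite of the apple" routing described above can be sketched as a simple fallback chain. The lookup table and URLs here are invented for illustration; the real service works off ThirdIron's journal metadata and your institutional holdings:

```python
# Sketch of accelerator-style routing: try identifier-based resolution first,
# and fall back to the traditional library link resolver when the item is
# unknown. All URLs and data below are invented placeholders.
from typing import Optional

KNOWN_ARTICLES = {
    # doi -> direct PDF link (what the accelerator specializes in)
    "10.1000/example.1": "https://publisher.example.com/pdf/1",
}
FALLBACK_RESOLVER = "https://resolver.example.edu/openurl?rft_id=info:doi/"

def resolve(doi: Optional[str]) -> str:
    """Return a direct PDF link if the item is known, else hand off."""
    if doi and doi in KNOWN_ARTICLES:
        return KNOWN_ARTICLES[doi]          # accelerator handles it directly
    return FALLBACK_RESOLVER + (doi or "")  # unknown item: normal link resolver
```

The fallback is what makes this safe to deploy in place of the resolver base URL: anything the accelerator cannot handle (ebooks, items without DOIs, unheld titles) still reaches the full link resolver menu.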
1.6 The Sci-hub effect?
When talking about delivery and access of academic content, it is impossible to avoid talking about Sci-hub.

So how popular is Sci-hub? When did it become fairly mainstream?
Using Google Trends, we can see that while Sci-hub began in 2011, there were spikes in search interest in Sci-hub in Feb and May 2016, and again in Dec 2017, in the United States.

One of the spikes coincided with John Bohannon's "Who's downloading pirated papers? Everyone" piece in Science magazine, published in April 2016.
Interestingly, a lot of the initiatives such as RA21 also began shortly after Sci-hub gained prominence.
Correlation is not causation of course, but one of the major findings of Bohannon's piece is that the heaviest users of Sci-hub are not merely users who pirate papers but also include many researchers who probably have access via their institutional entitlements yet still prefer to use Sci-hub!
Is poor user experience when accessing content driving researchers away?
In a section entitled "Need or convenience?", Bohannon suggested that "Many U.S. Sci-Hub users seem to congregate near universities that have good journal access" and that many users might be using Sci-hub even though they had legitimate access through their institutions, because using Sci-hub was much easier. There is also this interesting quote from a publisher:
And for all the researchers at Western universities who use Sci-Hub instead, the anonymous publisher lays the blame on librarians for not making their online systems easier to use and educating their researchers.
A study by librarian Bianca Kramer from Utrecht University, using the released dataset of Sci-hub user IP logs (also here), attempted to study whether downloads attributed to Utrecht users could in theory have been accessed elsewhere via open access or through institutional subscriptions. She found:
"Overall, 75% of Utrecht Sci-Hub downloads would have been available either through our library subscriptions (60%) or as Gold Open Access/free from publisher (15%). In so far as these downloads were requested by academic users (i.e. affiliated with Utrecht University), use of Sci-Hub for these materials could be seen as ‘convenience’. For the remaining 25%, Sci-Hub use could be seen as a way of getting access to articles that are not freely available through publishers."
While Sci-hub alone cannot be the only reason for the surge in publisher interest in improving the delivery workflow (for example, Roger Schonfeld was talking about stumbling blocks to research in 2015), it might have been a wake-up call that pushed this issue further into prominence.
On Scholarly Kitchen - always a good barometer of what scholarly publishers are thinking - articles worrying about "leakage" started to appear.
I've often found it hard to wrap my mind around what publishers mean by "leakage", but it seems to mean downloads or use of their content occurring outside their usual sites. As we have seen in the discussion of ResearchGate, the ultimate fear is that because such use is not recorded, the value of their content is obscured, and this might lead institutions to start cancelling.
That perhaps explains the fact that the announcement for GetFTR even says "Publishers Announce a Major New Service to Plug Leakage".
Incidentally, the earliest article I could find on Sci-hub on Scholarly Kitchen was dated Feb 2016, which is the month of the first spike.
Is providing delivery by DOIs or other PIDs something we should support?
As noted above, some have commented on the ease of use of Sci-hub as a reason for its success.
Enter a DOI or PMID and it brings you to the paper. Part of the attraction of course is that no user authentication is needed, but could it also be that searching by identifiers is part of the magic here, since it avoids the messiness of known item searching problems, particularly if the title is generic?
Perhaps thinking along these lines, ThirdIron's new Libkey.io service tries to exploit this and provides a Sci-hub-inspired site that allows users to enter DOIs and PMIDs to get to the full text, akin to Sci-hub.

Some skeptics might wonder: what evidence do we have that users expect to find full text by searching by DOIs?
At the recent CNI Spring 2020 Virtual Meeting, UIUC reported a fair number of DOI searches per day, which can probably be attributed to Sci-hub familiarity, and as such proposed the importance of handling such use cases.
In illinois search, a lot of cut and paste, 64% are known item (up from 56%). 1.5% are known citation searches, 76 per days are doi searches (scihub effect). Should library system focus on known tiem search/delivery? Age old q I have been wondering since I started in this area pic.twitter.com/JWPaN9cmUh
— Aaron Tay (@aarontay) May 27, 2020
While UIUC's bento search system, dubbed Easy Search (enhanced with context specific help features that try to bring users directly to PDF), can handle DOI searches by recognizing them and suggesting links, do the popular off-the-shelf library discovery services do the same?

Illinois library bento search recognises DOIs via suggestions
I'm going to take Primo as an example (Summon is probably similar but I haven't checked).
Firstly there is no advanced field search for DOIs in Primo, though you can vote for the idea here.
Few users even do advanced searches anyway, but would adding the DOI directly to the search bar give decent results?
Unfortunately, current support for DOI searching in Primo is not very good; for example, searching for common strings you see copied and pasted won't work - e.g. http://dx.doi.org/10.1016/j.acalib.2008.10.014
There is another even worse problem.
In the Primo Idea Exchange proposal - Remove CDI constant expansion of results - it is noted that because query expansion occurs almost all the time in Primo, it can cause a number of problems.
In CDI, expansion is constant, by term inflection applied to all searches, as well as higher recall in general by design. This cannot be prevented by features such as Boolean operators, quotation marks or Advanced Search,
With respect to searches with DOIs, the following might occur
Use case: If a user searches for a DOI, then they expect only that specific resource. CDI returns dozens of results with often no indication by term highlighting or snippets to explain why. This is discovered only after a time-consuming check on the full text to be because the DOI is in the Reference List. There is no clear pathway to the actually correct known item, and this is not consistently fixed just by ranking changes. For example, if we do not hold that article in full text, the user sees dozens of results, none of which are correct
In fact, when I think about it, current library discovery solutions generally support DOI delivery in only one way: via the old-school, almost never used citation linkers!
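For context, a citation linker is essentially a form that builds an OpenURL for the institution's link resolver. A rough sketch of what happens under the hood when a DOI is submitted (the resolver base URL below is a made-up example, and real linkers pass many more metadata fields):

```python
from urllib.parse import urlencode

def openurl_from_doi(resolver_base, doi):
    """Build a minimal OpenURL 1.0 (Z39.88-2004) request for a DOI.

    The referent is identified purely by its DOI, expressed as an
    info:doi URI in the rft_id parameter.
    """
    params = {
        "url_ver": "Z39.88-2004",      # OpenURL framework version
        "rft_id": f"info:doi/{doi}",   # the cited item, by DOI
    }
    return f"{resolver_base}?{urlencode(params)}"

openurl_from_doi("https://resolver.example.edu/openurl",
                 "10.1016/j.acalib.2008.10.014")
```

The resolver then looks up the institution's holdings for that DOI and offers full-text, ILL, or other services — which is exactly the "react accordingly" behaviour a DOI-aware search bar could trigger automatically.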

I'm not suggesting a return to those days, of course, but a single search bar with a context-smart helper function that detects when a DOI has been entered and reacts accordingly (as in UIUC's bento search ezhelper) seems worth doing.
Of course, even with such a system we would replicate only part of the Sci-Hub magic. Part of Sci-Hub's attraction is not just the ease of access by PIDs but also that you almost always get instant access, because Sci-Hub covers nearly all scholarly literature, while library institutional solutions give you instant access only some of the time.
1.7 The battle for privacy and questioning of agendas
Throughout this post, I have stuck to a mostly neutral description of these various new and not-so-new delivery trends, technologies and methods.
Of course, in reality, the more successful one method becomes in adoption, the less need there will be for the others. That said, some methods are not mutually exclusive; e.g. the Lean Library extension can allow users to authenticate via SAML/OpenAthens, and may have value beyond just facilitating access.
One of the greater debates around the adoption of these methods is the degree of user privacy that may be lost and the potential trade-offs between privacy and user experience (though not everyone thinks the trade-off needs to be made).
Despite this debate, I think everyone agrees that IP recognition plus EZproxy (the dominant method) in theory provides the best privacy protection for users (though, as with any method, there are ways around it, e.g. cross-site tracking), but, as noted from the start, it has poor user experience.
RA21 and Seamless Access claim to marry user experience with minimal privacy exposure. As already mentioned, the RA21 Position Statement on Access Brokers argues that while the tools they term access brokers (e.g. Kopernio, Lean Library) are useful in the short run, they require more work for users to set up and have the potential to cause privacy concerns, as such browser extensions need to actively track the pages you are on as you surf the net.
The Google-backed CASA of course raises privacy concerns as well: do you want Google to know even more about your users than it already does?
A lack of trust?
That said, it is clear to many librarians that most publishers have put their support behind RA21/Seamless Access (even though it is now run under the auspices of STM and NISO), which leads many to be suspicious of the publishers' true intentions and ultimately points to a trust issue.
And this is without getting into the GetFTR project, which is run solely by publishers and is not directly connected to RA21.
Given the clichéd refrain that "data is the new oil" and the common push to get into the workflows of researchers and users, with the accompanying focus on "user analytics", it is hard not to believe that many supporters of RA21/Seamless Access are not only looking to ease access for users but also have other agendas.
Commenters have suggested that publishers want to keep users on their platforms to reduce "leakage" (whether to Open Access copies in institutional repositories or illegal copies via Sci-Hub), which keeps publishers in users' workflows. Also, while Seamless Access options, if configured correctly, can reduce privacy risks to minimal levels, there is concern that this often won't be done.
Given that RA21 has stated that in the long run it aims to eliminate IP access rather than merely coexist with it, this is a huge sticking point if you still believe IP access and proxy servers have a place.
I will not go into the details of such debates, but I suggest the following further reading as starting points:
What Will You Do When They Come for Your Proxy Server? - First early response by Lisa Hinchliffe on RA21.
Myth Busting: Five Commonly Held Misconceptions About RA21 (and One Rumor Confirmed) - Todd Carpenter
In defense of the Proxy Server - Cody Hanson
The New Plugins — What Goals Are the Access Solutions Pursuing? - Kent Anderson
Conclusion
Some readers might be thinking that this is an odd time to start focusing on improving the user experience of delivery, given that the coming of open access might make a lot of this moot.
There are two answers to this. Firstly, open access, even in the most optimistic projections, will still take a decade or more and is likely to cover only journal articles. Libraries will still need to provide access to other licensed resources (A&I indexes, image archives, etc.) that will not be covered by open access.
The other reason is that some content providers, even in an open access world, would still want users to authenticate so they can track usage and users.

