For .NET-based applications, such as an Episerver website, Lime provides a library called DataAccess, which is available on NuGet.
In this article, we will not delve too deeply into the Lime Web Service library, but to give you an example, this is what a generic query might look like:
public IEnumerable<Record> ExecuteQuery(string tableName)
{
    var q = new Query(tableName);
    // Fields to include, filters and sorting are added to the query here.
    // The exact execution call depends on the DataAccess API, so this
    // sketch returns a placeholder instead.
    return Enumerable.Empty<Record>();
}
A lot of the time the goal is to own the content, by "converting" data from Lime into browsable Episerver pages. This can be done in two ways:
- by implementing custom content providers,
- or by creating a scheduled job.
Which one to pick depends on your client and their requirements. Both approaches yield the same end result from a visitor's point of view, as well as in terms of SEO and the ability to reuse data efficiently for other functions in the Episerver application.
However, the two approaches are quite different in their implementation and how the content is managed from an editorial point of view.
Content providers act as an intermediary between an Episerver platform and an external data source. This means that Episerver can retrieve data directly from an external source, in this case Lime, and have it function as any regular type of content (i.e. pages, blocks or media). As a result, once new content is published and available through Lime, it should immediately be accessible through your application.
This is definitely a strategy worth considering, but there are a couple of drawbacks to be aware of before taking this approach.
Strong dependency to the external source
The purpose of content providers is to retrieve content directly from an external source. If Lime is inaccessible, and the requested content is not cached, the content provider will fail.
Functionality from Episerver needs to be reimplemented
When implementing a new content provider, you need to tell Episerver how to fetch the new content. You have to implement methods such as LoadContent and LoadChildrenReferencesAndTypes, as well as deal with caching, all of which Episerver has already done for regular pages and blocks.
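To give an idea of what this involves, here is a minimal sketch of a custom content provider. The Episerver base class and method signatures are real, but LimeClient, GetRecord and the mapping logic are hypothetical stand-ins for your own service wrapper:

```csharp
using System.Collections.Generic;
using EPiServer.Core;

public class LimeContentProvider : ContentProvider
{
    // Hypothetical wrapper around the Lime Web Service.
    private readonly LimeClient _limeClient = new LimeClient();

    protected override IContent LoadContent(ContentReference contentLink, ILanguageSelector languageSelector)
    {
        // Called whenever the requested content is not already cached,
        // so every cache miss results in a call to Lime.
        var record = _limeClient.GetRecord(contentLink.ID);
        return MapRecordToContent(record);
    }

    protected override IList<GetChildrenReferenceResult> LoadChildrenReferencesAndTypes(
        ContentReference contentLink, string languageID, out bool languagesSupported)
    {
        languagesSupported = false;
        // List the Lime records that should appear as children of contentLink,
        // one GetChildrenReferenceResult per record.
        var children = new List<GetChildrenReferenceResult>();
        return children;
    }

    // Mapping from a Lime record to an IContent instance goes here;
    // omitted in this sketch.
    private IContent MapRecordToContent(object record) => null;
}
```

Even in this stripped-down form, you can see that fetching, child listing and mapping all become your responsibility.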
Expensive database calls
Something to consider when working with Lime, or any other service, is how expensive the requests to that service are. If they result in uncached database calls, this can have severe consequences for site performance.
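One generic way to soften this is to wrap the expensive lookup in a cache. The sketch below uses plain .NET's MemoryCache rather than any Lime- or Episerver-specific API, and the fetch delegate is a hypothetical stand-in for the real service call:

```csharp
using System;
using System.Runtime.Caching;

public static class LimeCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static T GetOrAdd<T>(string key, Func<T> fetch, TimeSpan ttl) where T : class
    {
        // Return the cached value if present; otherwise fetch and cache it
        // with an absolute expiration.
        if (Cache.Get(key) is T cached)
        {
            return cached;
        }

        var value = fetch();
        Cache.Set(key, value, DateTimeOffset.UtcNow.Add(ttl));
        return value;
    }
}
```

Usage might look like `LimeCache.GetOrAdd("lime:product:42", () => GetRecordFromLime(42), TimeSpan.FromMinutes(10))`, so repeated requests within the time-to-live never reach Lime.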
The above drawbacks can easily be avoided with another approach, which is why I instead recommend scheduled jobs.
Import as pages with scheduled jobs
A more reliable alternative to content providers, with a similar end result, is to run a scheduled import of your Lime catalogue. The benefits of this approach are:
- Content resides among the rest of the website content. Therefore, already imported content will still be available even if Lime happens to be inaccessible
- The content reuses Episerver's built-in fetching and storing functionality, and caching does not need to be reimplemented
- While logging is equally important in all approaches, scheduled jobs provide a history view, in which you can see if previous runs have succeeded or failed, along with additional information that may have been provided
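The skeleton of such a job is small. The ScheduledJobBase base class and the ScheduledPlugIn attribute are Episerver's real APIs; the import logic itself is only indicated with comments here:

```csharp
using EPiServer.PlugIn;
using EPiServer.Scheduler;

[ScheduledPlugIn(DisplayName = "Import Lime catalogue",
    Description = "Imports records from Lime as Episerver pages")]
public class LimeImportJob : ScheduledJobBase
{
    public override string Execute()
    {
        var imported = 0;

        // Fetch records from the Lime Web Service and create or update the
        // corresponding pages through IContentRepository here.

        // The returned string shows up in the job's history view in admin mode.
        return $"Imported {imported} records from Lime.";
    }
}
```

The string returned from Execute is what feeds the history view mentioned above, and ScheduledJobBase also lets a long-running import support manual stopping.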
Still a dependency to the external source
With this approach problems can still occur in the event of an error, although often with a smaller impact. If the job fails, new content will not be imported; however, since previously imported content remains on your website, it will still be accessible.
New content may be delayed
Depending on how often the scheduled job runs, new content published in Lime will have to wait until the next scheduled run. If the content is important and/or planned, the job can of course be executed manually through Episerver's admin view.
I have described two ways of fetching data from an external source for integration with an Episerver application. Both approaches work and are good from an SEO and editorial perspective. However, I often recommend going with the scheduled job approach, which is also what I did when integrating Lime. The main reason is that if the external part fails, I can still guarantee the functionality of my application.