Refresh Cache in Spring - spring-cache

How can I refresh the Spring cache when I insert data into the database through my services, and also when data is added directly to the database? Can this be achieved?
Note:
I am using the following libraries:
1) net.sf.json-lib
2) spring-context-support

through my services
This is typically achieved in your application's services (e.g. @Service application components) using Spring's @Cacheable and @CachePut annotations, for example...
@Service
class BookService {

    @Cacheable("Books")
    Book findBook(ISBN isbn) {
        ...
        return booksRepository.find(isbn);
    }

    @CachePut(cacheNames = "Books", key = "#book.isbn")
    Book update(Book book) {
        ...
        return booksRepository.save(book);
    }
}
Both @Cacheable and @CachePut keep the cache provider up to date: @Cacheable populates the cache when a miss causes the underlying method to call through to the database, and @CachePut updates the cache entry on every call.
and when I added data directly into database
This is typically achieved by the underlying cache store. For example, in GemFire, you can use a CacheLoader to "read-through" (to your underlying database, perhaps) on "cache misses". See GemFire's user guide documentation on "How Data Loaders Work" for an example and more details.
So, back to our example: if the "Book (Store)" database was updated independently of the application (which uses Spring's Caching Annotation support and infrastructure), then a developer just needs to define a strategy for cache misses. In GemFire, that could be a CacheLoader, and when...
bookService.findBook(<some isbn>);
is called, resulting in a "cache miss", GemFire's CacheLoader will kick in, load the cache with that book, and Spring will see it as a "cache hit".
Of course, our implementation of bookService.findBook(..) goes to the underlying database anyway, but it only retrieves a single book. A loader could instead be implemented to populate an entire set (or range) of books based on some criteria (such as popularity), pre-caching the particular books the application expects potential customers to search for first.
So, while Spring's Cache annotations typically work per entry, a cache store specific strategy can be used to prefetch and, in-a-way, "refresh" the cache, lazily, on the first cache miss.
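For concreteness, a loader along these lines might look like the following sketch. It is written against Apache Geode's/GemFire's CacheLoader callback interface; the ISBN, Book, and BookRepository types are assumed from the example above.

import org.apache.geode.cache.CacheLoader;
import org.apache.geode.cache.CacheLoaderException;
import org.apache.geode.cache.LoaderHelper;

public class BookCacheLoader implements CacheLoader<ISBN, Book> {

    private final BookRepository booksRepository;

    public BookCacheLoader(BookRepository booksRepository) {
        this.booksRepository = booksRepository;
    }

    // Invoked by GemFire on a cache miss for helper.getKey(); the value
    // returned here is put into the Region and handed back to the caller,
    // so Spring's @Cacheable lookup sees a populated cache entry.
    @Override
    public Book load(LoaderHelper<ISBN, Book> helper) throws CacheLoaderException {
        return booksRepository.find(helper.getKey());
    }

    @Override
    public void close() {
        // No resources to release in this sketch.
    }
}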
In summary, while the former can be handled by Spring, the "refresh", per se, is typically handled by the caching provider (e.g. GemFire).
Hopefully this gives you some ideas.
Cheers,
John

Related

Node.js express app architecture with testing

I am creating a new project with automated testing.
It uses basic Express.
The question is how to organize the code so that it can be tested properly (with mocha).
Almost every controller needs access to the database in order to fetch some data to work on, but while testing, reaching the actual database is unwanted.
There are two ways as I see it:
Stubbing the functions that read from or write to the database.
Building two separately constructed controllers, one of which is used from the endpoints and the other from the tests.
Like this:
let myController = new TargetController(AuthService, DatabaseService, ...);
myController.targetMethod();

let myTestController = new TargetController(FakeAuthService, FakeDatabaseService, ...);
myTestController.targetMethod(); // This method will use fake services which don't have any remote connection functionality
Every dependency passed in is assigned to a private field inside the controller's constructor. By always going through these private fields, the controller does not need to care what kind of call it is serving, a test or a production one.
Is that a good approach, or should it be remade?
Alright. It's considered good practice, as it is actually the dependency injection pattern.
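For illustration, a minimal mocha test of that pattern could look like the sketch below; the DatabaseService shape and its method names are assumptions, not taken from the question.

import { strict as assert } from "assert";

// Assumed service interface -- the question doesn't show its shape.
interface DatabaseService {
  getUser(id: string): Promise<{ id: string; name: string }>;
}

// Controller with constructor-injected dependencies, as in the question.
class TargetController {
  constructor(private db: DatabaseService) {}

  async targetMethod(id: string) {
    return this.db.getUser(id);
  }
}

// Fake service with no remote connection functionality, used only by tests.
const fakeDatabaseService: DatabaseService = {
  getUser: async (id) => ({ id, name: "stubbed user" }),
};

describe("TargetController", () => {
  it("uses whatever database service was injected", async () => {
    const controller = new TargetController(fakeDatabaseService);
    const user = await controller.targetMethod("42");
    assert.equal(user.name, "stubbed user");
  });
});

The production code constructs the same controller with the real services; neither the controller nor the test needs to know the difference.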

Node typescript library environment specific configuration

I am new to Node and TypeScript. I am working on developing a Node library that reaches out to another REST API to get and post data. This library is consumed by any UI application to send and receive data from the API service. Now my question is: how do I maintain environment-specific configuration within the library? For example:
Consumer calls GET /user
The user endpoint on the consumer side calls a method in the library to get the data.
But if the consumer is calling the user endpoint in the test environment, I want the library to hit the following API URLs:
for test: http://api.test.userinformation.company.com/user
for beta: http://api.beta.userinformation.company.com/user
As far as I understand, the library is just a reference and runs within the consumer application. The library can certainly get the environment from the consumer, but I do not want the consumer to have to specify the full URL to hit, since figuring that out should be the library's responsibility.
Note: the URL is not the only problem; I can solve that with an environment switch within the library. I also have some client secrets that vary by environment, which I can neither store in the code nor check in to source control.
Additional Information
(as per jfriend00's request in comments)
My library has a LibExecutionEngine class with one method, which is the entry point of the library:
export class LibExecutionEngine implements ExecutionEngine {
  constructor(
    private environment: Environments,
    private userLoader: UserLoader
  ) {}

  async GetUserInfo(
    userId: string,
    userGroupVersion: string
  ): Promise<UserInfo> {
    return this.userLoader.loadUserInfo(userId, userGroupVersion)
  }
}

export interface ExecutionEngine {
  GetUserInfo(userId: string, userGroupVersion: string): Promise<UserInfo>
}
The consumer starts using the library by creating an instance of LibExecutionEngine and then calling GetUserInfo, for example. As you can see, the constructor for the class accepts an environment. Once I have the environment in the library, I need to somehow load the values for the keys API URL, APIClientId, and APIClientSecret from within the constructor. I know of a few ways to do this:
Option 1
I could do something like this._configLoader.SetConfigVariables(environment), where configLoader.ts is a class that loads the specific configuration values from files ({environment}.json). But this would mean maintaining the above-mentioned URL plus the respective clientId and clientSecret in a JSON file, which I should not be checking in to source control.
Option 2
I could use the dotenv npm package and create one .env file defining the three keys, with the values supplied by the deployment configuration. That works perfectly for an independently deployable application, but this is a library and doesn't run by itself in any environment.
Option 3
Accept a configuration object from the consumer, meaning the consumer of the library provides the URL, clientId, and clientSecret for the current environment. But why should the responsibility of maintaining the variables the library needs be put on the consumer?
Please suggest how best to implement this.
So, I think I got some clarity. Let's call my library L, the consuming app C1, and the API that the library calls out to for user info A. All are internal applications in our org, with an OAuth setup that lets them communicate; our infosec team provides the client IDs and secrets to the individual applications. So my clarity here is: C1 would request its own clientId and clientSecret to hit A's URL, then pass the three config values in to the library, which the library uses to communicate with A. The same applies for some C2 in the future.
Which would mean that L somehow needs to accept a full configuration object, with all the required config values, from its consumers C1, C2, etc.
Yes, that sounds like the proper approach. The library is just some code doing what it's told. It's the client in this case that has to fetch the clientId and clientSecret from the infosec team, maintain them, and keep them safe, and the client also has the URL that goes with them. So the client passes all of this into your library, ideally just once per instance, and you then keep it in your instance data for the duration of that instance.
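A sketch of that shape, under the stated assumptions (the LibConfig interface, header names, and environment variable names below are illustrative, not a real API):

// The consumer (C1, C2, ...) supplies all environment-specific values.
export interface LibConfig {
  apiUrl: string;
  apiClientId: string;
  apiClientSecret: string;
}

export class LibExecutionEngine {
  // Config is injected once per instance and kept for its lifetime.
  constructor(private readonly config: LibConfig) {}

  async GetUserInfo(userId: string, userGroupVersion: string): Promise<unknown> {
    // How A expects the credentials is an assumption here (e.g. OAuth
    // client-credential headers); adjust to whatever A actually requires.
    const res = await fetch(
      `${this.config.apiUrl}/user/${userId}?groupVersion=${userGroupVersion}`,
      {
        headers: {
          "client-id": this.config.apiClientId,
          "client-secret": this.config.apiClientSecret,
        },
      }
    );
    return res.json();
  }
}

// Consumer side: C1 loads its own secrets from its deployment environment
// and passes them in once, so the library never stores them or checks them in.
const engine = new LibExecutionEngine({
  apiUrl: process.env.API_URL ?? "http://api.test.userinformation.company.com",
  apiClientId: process.env.API_CLIENT_ID ?? "",
  apiClientSecret: process.env.API_CLIENT_SECRET ?? "",
});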

Why is data access tightly coupled to the Service base in ServiceStack

I'm curious why the decision was made to couple the Service base class in ServiceStack to data access (via the Db property). With web services it is very popular to use the Repository pattern to fetch the raw data from the database. These data repositories can be used by many services without having to go through a service class.
For example, let's say I am supporting a large retail chain that operates across the nation. There are a number of settings that differ across stores, like tax rates. Each call to one of the web services needs these settings for its domain logic. With a repository pattern I would simply create a data access class whose sole responsibility is to return these settings. In ServiceStack, however, I am exposing these settings as a Service (which they need to be as well). In my service call, the first thing I end up doing is newing up the Setting service and using it inside my other service. Is this the intention? Since services return an object, I have to cast the result to the typed service result.
ServiceStack's convenient ADO.NET IDbConnection Db property allows you to quickly create database-driven services (i.e. the most popular kind) without the overhead and boilerplate of creating a repository, if preferred. As ServiceStack Services are already testable and the DTO pattern provides a clean, endpoint-agnostic web service interface, there's often not a lot of value in wrapping and proxying "one-off" data access into a separate repository.
But at the same time, there's nothing forcing you to use the base.Db property (it has no effect if unused). The Unit Testing Example on the wiki shows an example of using either base.Db or the Repository pattern:
public class SimpleService : Service
{
    public IRockstarRepository RockstarRepository { get; set; }

    public List<Rockstar> Get(FindRockstars request)
    {
        return request.Aged.HasValue
            ? Db.Select<Rockstar>(q => q.Age == request.Aged.Value)
            : Db.Select<Rockstar>();
    }

    public RockstarStatus Get(GetStatus request)
    {
        var rockstar = RockstarRepository.GetByLastName(request.LastName);
        if (rockstar == null)
            throw HttpError.NotFound("'{0}' is no Rockstar".Fmt(request.LastName));

        var status = new RockstarStatus
        {
            Alive = RockstarRepository.IsAlive(request.LastName)
        }.PopulateWith(rockstar); // Populates with matching fields

        return status;
    }
}
Note: Returning an object or a strongly typed DTO response like RockstarStatus has the same effect in ServiceStack, so if preferred you can return a strongly typed response and avoid any casting.

ServiceStack Service structure for predominantly read-only UI

I'm getting started with ServiceStack and I've got to say I'm very impressed with all it has under the bonnet and how easy it is to use!
I am developing a predominantly read-only application with it. There will likely be updates to the database 3 or 4 times a year but the rest of the time the solution will be displaying data on an electronic information board (large touch screen monitor).
The database structure is well normalised, with a few foreign-keyed tables, and with this in mind I think it may be best to separate the read-only API from the CRUD API. The CRUD API can be used to create and modify the relational data, with POCO classes matching the database tables. I would then have the read-only API flatten the relational data into a few POCOs spanning several db tables, making the data easier to handle in the read-only UIs.
I'm just looking for ideas and advice really on whether this separation of concerns is wasted effort or if there is a better way of achieving what I need? Has anyone had similar thoughts / ideas?
Having developed a similar read-only application (a gazetteer, updated quarterly/yearly) using ServiceStack, we went with optimizing the API for reads, making use of the built-in caching:
// For cached responses this has to be an object
public object Any(CachedRequestDto request)
{
    string cacheKey = request.CacheKey;
    return this.RequestContext.ToOptimizedResultUsingCache(
        base.Cache, cacheKey, () =>
        {
            using (var service = this.ResolveService<RequestService>())
            {
                return service.Any(request.TranslateTo<RequestDto>())
                    .TranslateTo<CachedResponseDto>();
            }
        });
}
Where CacheKey is just:
public string CacheKey
{
    get
    {
        return UrnId.Create<CachedRequestDto>(
            string.Format("{0}_{1}", this.Field1, this.Field2));
    }
}
We did start creating a CRUD/POCO service, but for speed went with bulk import tools such as SQL Server DTS/SSIS or console apps, which suffices for now; we will revisit this later if required.
You might want to consider something like CQRS.
https://gist.github.com/kellabyte/1964094 (or Google "CQRS Martin Fowler"; I can only post two links).
I also found the following article valuable recently when starting to implement additional search-type services: https://mathieu.fenniak.net/stop-designing-fragile-web-apis/

Extending log4net - Adding additional data to each log

We're working on logging in our applications, using log4net. We'd like to capture certain information automatically with every call; the code calling log.Info or log.Warn should call them normally, without specifying this information.
I'm looking for a way to create something we can plug into log4net, something between the ILog the applications use to log and the appenders, so that we can put this information into the log message somehow, either via the ThreadContext or the LoggingEvent.
The information we're looking to capture is ASP.NET-related: the request URL, user agent, etc. There's also some information from the app's .config file we want to include (an application ID).
I want to get between the normal ILog.Info and the appenders so that this information is also automatically included for third-party libraries that use log4net (NHibernate, NServiceBus, etc.).
Any suggestions on where the extensibility point I want would be?
Thanks
What you are looking for is called log event context. This tutorial explains how it works:
http://www.beefycode.com/post/Log4Net-Tutorial-pt-6-Log-Event-Context.aspx
In particular, the chapter 'Calculated Context Values' will be interesting for you.
Update:
My idea was to use the global context. It is easy to see how this works for something like the application ID (in fact, there you do not even need a calculated context object). Dynamic information like the request URL could be done like this:
public class RequestUrlContext
{
    public override string ToString()
    {
        // Resolved lazily, on the thread that handles the current request
        // (assumes classic ASP.NET, i.e. System.Web is available).
        var context = System.Web.HttpContext.Current;
        return context != null ? context.Request.Url.ToString() : string.Empty;
    }
}
The object is global, but the method is called on the thread that handles the request, and thus you get the correct information. I also recommend that you create one class per "information entity" so that you have a lot of flexibility with the output at the log destination.
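For example, the hook-up could be done once at application startup (e.g. Application_Start in Global.asax for classic ASP.NET). The property names and the "ApplicationId" appSettings key below are illustrative assumptions:

// One-time wiring of calculated and static context values.
log4net.GlobalContext.Properties["requestUrl"] = new RequestUrlContext();
log4net.GlobalContext.Properties["appId"] =
    System.Configuration.ConfigurationManager.AppSettings["ApplicationId"];

Appenders then pick these values up through the pattern layout, e.g. a conversionPattern containing %property{appId} or %property{requestUrl}. Because the context sits between the ILog calls and the appenders, it is applied to events from third-party libraries such as NHibernate or NServiceBus as well.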
