Database connection according to drop-down list - laravel-7

I am starting an interesting project, but I have a problem that I would like to solve before beginning.
I am in the process of building an extranet for a franchise. The franchise has several franchisees across several geographic sectors. They will all have to connect from the same login page (same application). On this page there will be a region selector, and there will be one database per franchisee (an imposed constraint).
Do you know of a method so that, depending on the region selector, the user connects to the chosen database?
For example:
region PACA => database 1
region Center => database 2
etc.
I know Laravel relatively well, but I have never run into this problem before. If anyone has an idea or has faced the same scenario, I would be very grateful if they would share their insight.
Thank you in advance for your help

There are multiple ways to achieve what you want; it depends on what is duplicated.
In any case, you have to store the mapping somewhere, as one object per option, for example:
"regionA": {
"db": "dbA",
"name": "Region A"
}
Manage it however you want; that is why I wrote it as JSON.
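For instance, here is a minimal sketch of that same mapping as a Laravel config file (the file name config/regions.php and the connection names are illustrative assumptions, not Laravel conventions):

<?php

// config/regions.php -- hypothetical file mapping each region key from
// the login page's selector to a connection name and a display label.
return [
    'regionA' => ['db' => 'db_regionA', 'name' => 'Region A'],
    'regionB' => ['db' => 'db_regionB', 'name' => 'Region B'],
];

You could then read an entry with config('regions.regionA.db').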
Each region has a different server
You just redirect people to the right website, and each website uses its own database.
One server, but a different database schema per region
Use a different connection according to the region. For example, region "A" uses the database "software_a".
For example, one will do:
DB::connection("db_regionA")
Another will do:
DB::connection("db_regionB")
One database, one table, and each row stores its region
Then just filter in the SELECT query, such as:
SELECT * FROM mytable WHERE region = "A"
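In Laravel, one way to avoid repeating that WHERE clause on every query is an Eloquent global scope; here is a sketch, assuming the chosen region is kept in the session under a hypothetical "region" key:

<?php

use Illuminate\Database\Eloquent\Builder;
use Illuminate\Database\Eloquent\Model;

// A hypothetical model whose table carries a `region` column; the global
// scope appends "WHERE region = ?" to every query built on this model.
class Customer extends Model
{
    protected static function booted()
    {
        static::addGlobalScope('region', function (Builder $query) {
            $query->where('region', session('region'));
        });
    }
}

// Customer::all() now only returns rows for the region in the session.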

Related

Passing More than 1000 parameters in RESTful api

Our dashboard has dropdowns consisting of more than 8k products each, and we have 4-5 such dropdowns.
I want to filter data based on these dropdowns.
But if I select all products, the RESTful API URL breaks.
Can I use Azure Service Bus or a similar message broker service to pass these complex, multiple parameters, which would then be consumed by all the APIs?
While I do understand what your need is, I suggest rethinking your approach, as that will lead to a better user experience.
I would use a first dropdown, initially empty, that fills in options as the user types against whatever product list you have; that way, only the items the user is interested in are loaded.
I would also suggest not loading anything into that dropdown until the user has entered, say, 5 characters.
After that, let the user select an item from the dropdown, use it to filter the second dropdown, and continue with the same technique for all the dropdowns you have.
Let me know if you have any questions around this approach and I will be more than happy to provide an example if you don't know how to do it.
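To illustrate the server side of that technique, here is a minimal sketch in PHP (kept in PHP for consistency with the other examples on this page; the endpoint name, the q parameter, and the products table are all assumptions, and the same shape applies if your back end is an Azure Function):

<?php

// search.php -- hypothetical endpoint behind the first dropdown.
// Returns at most 20 products whose name contains the typed text,
// and returns nothing until at least 5 characters were typed.
$q = $_GET['q'] ?? '';

header('Content-Type: application/json');

if (strlen($q) < 5) {
    echo json_encode([]);
    exit;
}

$pdo  = new PDO('mysql:host=localhost;dbname=shop', 'user', 'secret');
$stmt = $pdo->prepare('SELECT id, name FROM products WHERE name LIKE ? LIMIT 20');
$stmt->execute(['%' . $q . '%']);

echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));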
Edit:
Including samples for loading dropdown lists dynamically, and also another post with an example of how to return JSON from an Azure Function, which can return the data dynamically the way you need.
From what I can see, I think it'll be helpful for you to go through several different examples that build progressively toward what you need to achieve. First of all, it's good to know how to load items dynamically into a dropdown list:
That is a nice example: you can have an entry where the user types whatever products you have, and once they have typed, let's say, more than 5 characters, go and search for that data.
https://www.codebyamir.com/blog/populate-a-select-dropdown-list-with-json
This other example shows you how to return data from an Azure Function with the data that you need to show the users. It is not exactly what you need: you need to receive a parameter with what the user typed, search your database for the items that match that search, and return the data in JSON format so you can use it on your web page to populate the dropdown list. I would suggest initially hard-coding a few items and returning those to check that the functionality is there, and once you have that up and running, move on to getting the actual data from the database.
How to return a JSON object from an Azure Function with Node.js

Odoo API Dynamically Load Data to Different Instances

Does anyone here know how to post data to 2 different instances of Odoo at the same time? Below is my code:
# `models`, `db`, `uid` and `password` come from the standard Odoo
# XML-RPC setup (an xmlrpc.client.ServerProxy against /xmlrpc/2/object,
# after authenticating against /xmlrpc/2/common).
for key, value in customers.items():
    for val in value:
        models.execute_kw(db, uid, password, 'res.partner', 'create', [{
            "contact_type": val['contact_type'],
            "mobile": val['mobile'],
            "name": val['name'],
            "phone": val['phone'],
            "ref": val['ref'],
            "state_id": val['state_id'],
            "x_studio_after_sales_sc_suppor": val['sc_name'],
            "x_studio_birthday": val['x_studio_birthday'],
            "x_studio_genre": val['x_studio_genre'],
            "x_studio_spoken_language": val['lang_name'],
            "company_id": 2,
        }])
I want to dynamically load customer information into the 2 instances by just changing the company_id field value: 1 for the default instance and 2 for the second instance. My code so far is able to load data into the default instance; the second instance throws permission errors. Any ideas on how to resolve this?
Edit:
In that case you have to check whether your user has rights to write for the second company: user -> Allowed Companies.
All related objects also have to be accessible by the second company (I can't see any relations in your example).

Kentico - WHERE condition for custom Page Types page

I have a custom page type for employees, and one of the fields is Location. I want to show/filter only employees in "San Jose" or "San Francisco" and used this WHERE condition below but it didn't work. Apparently, I missed something very basic. Could you help?
Location LIKE '%San%';
I did another test where, instead of a page type, I used a custom table with the exact same field names, and I was able to filter using the same statement. On a related note, I'm new to Kentico and am exploring which is more suitable for creating/maintaining a list of about 100 employees - Page Types or Custom Tables - with the ability to filter by department, location, etc. I'd appreciate your input here as well. Best!
If you're adding the WHERE condition to a standard Kentico repeater or other data source, the syntax looks right, except that you do not need the semicolon ";".
You'll also want to double-check the field name and, if you are limiting your query to certain columns (as is best practice, especially for larger data sets), be sure the field you are filtering on is among the selected columns.
Regarding the management of your employee list, either method you've described will work. In that scenario it typically depends on who will be editing the content, and how frequently. It is more editor-friendly, in my opinion, to add those documents into the content tree. This also gives you quicker control over the order, and keeps it similar to how other content on the site is maintained. I also like to set up folders or other parent page types as categories if needed, so the documents can be dragged and dropped between them and it sets up a visual taxonomy that isn't possible if it's all stored in a table. Storing items in the tree also allows for workflow and versioning, as well as more granular control over permissions/access, if this is important to you.
It's awesome that you are thinking in advance about how best to store your data. There are many factors to consider, such as the overall number of records, the number of columns, and whether you need workflow, versioning, preview, etc.
The best source of information regarding this would be this article, which summarizes all the options you have and gives clear explanations of which to use in which scenario.
And to your original question: what component are you using to display the data? Is it the repeater? If so, can you make sure to set the Page types property to match the page type you are displaying? If the page types property is not configured, Kentico will not load any custom fields, because it doesn't know which table it should load the data from.
Additionally, make sure to either include the "Location" field in the Columns property or leave the columns blank (not recommended, because then Kentico loads all columns, which is around 200 once you count everything from CMS_Document, CMS_Tree, etc.).
Below is the process I use to debug whenever I want to add a repeater and am facing some problem.
First, get all the columns instead of a limited set; fetching all columns makes sure there is no problem retrieving the data.
If any particular column's information is missing, I double-check the column name.
I verify this by firing up SQL Server Management Studio and accessing the data from the page type table or custom table.
If access to SQL Server is not available (common in Azure-hosted solutions with restrictive access to the DB), I enable SQL debugging in the settings and look at the query the repeater is generating to see whether it is correct.

LoadRunner facing issue during dynamic data handling

I am using LoadRunner 9.5. I am facing a problem with dynamic data handling. The scenario is given below.
I have a library management application: Login -> Select book (data displayed based on user credentials) -> Purchase -> Logout.
For example:
Guest user: 50 books displayed to choose from
Admin: all books displayed
Normal user: 100 books displayed
Please help me understand how to handle this type of dynamic data based on user role. Is there a need to create a different script for each role?
Please follow the steps below:
Record the same flow with the same user credentials twice (a replica of the first script).
Compare the two scripts using WDiff.
Find the values which differ, like purchase order, timestamp, and user session.
Correlate the values which are highlighted in yellow, meaning the values which are different in each script.
Have you had the benefit of training in LoadRunner and a mentor for your first year of work in this field?

Looking for a unique ID pattern which is easily indexed by search engines

Like from Microsoft - "KB2756872", or from the National Vulnerability Database - "CVE-2010-1428", or from Red Hat - "RHSA-2010:0376", or from OIDs - "1.3.6.1.4.1.311", or from UUID/GUID - "550e8400-e29b-41d4-a716-446655440000".
I want the UIDs to serve several jobs. See below...
I develop blog software and have the idea of putting a unique ID in the body of each post, so I can easily identify which copy in local storage corresponds to which remotely published copy.
I also want to post to many different blogging services, so that if one is down, the articles remain accessible from another. A link can die, but if I add a UID, anyone can try a web search to find the post on another service!
This also allows gathering some statistics on how articles spread. Many sites just replicate content (copy-writing and rewriting bots and people), which confuses search engines. With a UID I can easily identify such sites...
So my question is how to form the UIDs so that they would be easily indexed by search engines (web, like Google/Yahoo, and corporate, like Lucene/Solr/Sphinx/Xapian/etc.).
I know about some search engine limitations, like:
each searched term must be >= 3 characters
junk like gfh6wytrh6wu56he5gahj763 does not get indexed
so this task is not easy...
Any advice is appreciated (books/blog articles/etc).
You could use Tag URIs, as defined by RFC 4151.
They are globally unique, and anyone who has owned a domain name or an email address for at least a day can mint them.
Note that these URIs only identify, they don’t locate. So a Tag URI doesn’t say anything about where something is published.
Let’s say your site’s domain is "example.com". If you create a blog post, you could create the following Tag URI:
tag:example.com,2012-12:cute-cat
Note that the date in this URI is not a publication date! It must be a (past) date on which you owned the domain (resp. email address). If you registered your domain in 2003, you could always use Tag URIs starting with tag:example.com,2004: (not "2003", because "2003" would mean "2003-01-01", which might be a time where you didn’t own the domain yet), followed by a (unique) string under your control. However, if you like you could always use the publication date, of course. But don’t use future dates.
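For illustration, minting such a URI is a one-liner; here is a sketch in PHP (the domain, date, and slug are placeholders, and the date must satisfy the ownership rule described above):

<?php

// Mint an RFC 4151 tag URI. The tagging entity is a domain (or email
// address) plus a date on which you owned it; the specific part is any
// string that is unique under your control, here a post slug.
function tagUri(string $domain, string $date, string $slug): string
{
    return sprintf('tag:%s,%s:%s', $domain, $date, $slug);
}

echo tagUri('example.com', '2012-12', 'cute-cat');
// => tag:example.com,2012-12:cute-cat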
You can use a year-and-number-based article identifier, just like CVE identifiers. Since you need revisions as well, you can append a dot after the identifier to indicate the version. For example, for an AWesome Blog Service, AWBS-2012-1.0 would refer to the original document, AWBS-2012-1.1 to its first revision, etc.
However, you need to make sure the AWBS numbers are unique before you use them. CVEs are assigned manually from a pool. You would probably need some kind of service that assigns AWBS numbers from a pool. It could be a simple database query.
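Here is a minimal sketch of such a pool, assuming a hypothetical awbs_pool table whose AUTO_INCREMENT id reserves each number atomically (note that a plain AUTO_INCREMENT does not restart at 1 each year, so the numbers are unique overall rather than per year):

<?php

// Hypothetical pool table:
//   CREATE TABLE awbs_pool (
//       id   INT AUTO_INCREMENT PRIMARY KEY,
//       year INT NOT NULL
//   );
$pdo  = new PDO('mysql:host=localhost;dbname=blog', 'user', 'secret');
$year = (int) date('Y');

// Each insert reserves the next free number atomically.
$stmt = $pdo->prepare('INSERT INTO awbs_pool (year) VALUES (?)');
$stmt->execute([$year]);

$awbs = sprintf('AWBS-%d-%d.0', $year, $pdo->lastInsertId());
// e.g. "AWBS-2012-1.0" for the original revision of the first article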
