In my end-to-end tests, I want to drop the "test" database and then create a new test db. Dropping an entire database is simple:
mongoose.connection.db.command({ dropDatabase: 1 }, function (err, result) {
    console.log(err);
    console.log(result);
});
But how do I now create the test db? I can't find a createDatabase or useDatabase command in the docs. Do I have to disconnect and reconnect, or is there a proper command? It would be bizarre if the only way to create a db was as a side effect of connecting to a server.
Update
I found some C# code that appears to create a database: it connects, drops the database, then gets the database again (without disconnecting?), and that creates the new db. This is what I will do for now.
public static MongoDatabase CreateDatabase()
{
    return GetDatabase(true);
}

public static MongoDatabase OpenDatabase()
{
    return GetDatabase(false);
}

private static MongoDatabase GetDatabase(bool clear)
{
    var connectionString = ConfigurationManager.ConnectionStrings["MongoDB"].ConnectionString;
    var databaseName = GetDatabaseName(connectionString);
    var server = new MongoClient(connectionString).GetServer();
    if (clear)
        server.DropDatabase(databaseName);
    return server.GetDatabase(databaseName);
}
MongoDB will create (or re-create) the database automatically the next time a document is saved to a collection in that database. You shouldn't need any special code, and I don't think you need to reconnect either; just save a document in your tests and you should be good to go. FYI, the same pattern applies to MongoDB collections: they are implicitly created on write.
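For example, a minimal sketch of the drop-then-write pattern (this uses the native driver handle that mongoose exposes, and assumes a 2.x-or-later driver for insertOne; the 'widgets' collection and seed document are just placeholders):
mongoose.connection.db.dropDatabase(function (err) {
    if (err) return console.error(err);
    // The database is gone at this point; the first write implicitly re-creates it.
    mongoose.connection.db.collection('widgets').insertOne({ name: 'seed' }, function (err) {
        if (err) return console.error(err);
        console.log('test db re-created on first write');
    });
});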
Related: This YouTube video, at 27:20, talks about populating the cache with routing info to avoid latency during a cold start.
You can either try to get a document you know doesn't exist, or you can use CosmosClient.CreateAndInitializeAsync().
I already have this code set up:
private async Task<Container> CreateContainerAsync(string endpoint, string authKey)
{
    var cosmosClientBuilder = new CosmosClientBuilder(
            accountEndpoint: endpoint,
            authKeyOrResourceToken: authKey)
        .WithConnectionModeDirect(portReuseMode: PortReuseMode.PrivatePortPool, idleTcpConnectionTimeout: TimeSpan.FromHours(1))
        .WithApplicationName(UserAgentSuffix)
        .WithConsistencyLevel(ConsistencyLevel.Session)
        .WithApplicationRegion(Regions.AustraliaEast)
        .WithRequestTimeout(TimeSpan.FromSeconds(DatabaseRequestTimeoutInSeconds))
        .WithThrottlingRetryOptions(TimeSpan.FromSeconds(DatabaseMaxRetryWaitTimeInSeconds), DatabaseMaxRetryAttemptsOnThrottledRequests);

    var client = cosmosClientBuilder.Build();
    var databaseResponse = await CreateDatabaseIfNotExistsAsync(client).ConfigureAwait(false);
    var containerResponse = await CreateContainerIfNotExistsAsync(databaseResponse.Database).ConfigureAwait(false);
    return containerResponse;
}
Is there any way to incorporate CosmosClient.CreateAndInitializeAsync() with it to populate the cache?
If not, is it ok to do this to populate the cache?
public class CosmosClientWrapper
{
    public CosmosClientWrapper(IKeyVaultFacade keyVaultFacade)
    {
        // endpoint and authenticationKey come from keyVaultFacade (retrieval elided).
        var container = CreateContainerAsync(endpoint, authenticationKey).GetAwaiter().GetResult();
        // Get a document that doesn't exist to populate the routing info:
        container.ReadItemAsync<object>(Guid.NewGuid().ToString(), PartitionKey.None).GetAwaiter().GetResult();
    }
}
The point of CreateAndInitialize or BuildAndInitialize is to pre-establish the connections required to perform Data Plane operations against the desired containers (see https://learn.microsoft.com/azure/cosmos-db/nosql/sdk-connection-modes#routing).
If the containers do not exist, it makes no sense to use CreateAndInitialize or BuildAndInitialize: there are no target backend endpoints to connect to, so there are no connections that can be pre-established or warmed up. That is why the container/database information is required; the only benefit is warming up the connections to the backend machines that serve those containers.
Please see CosmosClientBuilder.BuildAndInitializeAsync, which creates the Cosmos client and initializes the provided containers. I believe this is what you are looking for.
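For illustration, a minimal sketch of the builder variant: once the database and container exist (e.g. after your CreateDatabaseIfNotExistsAsync/CreateContainerIfNotExistsAsync calls have run at least once), you could replace cosmosClientBuilder.Build() with something like this (the names are placeholders):
// List every container you want warmed up; the names here are hypothetical.
var containers = new List<(string databaseId, string containerId)>
{
    ("MyDatabase", "MyContainer")
};

// Builds the client, opens the Direct-mode connections, and populates the
// routing caches for the listed containers before the first real request.
var client = await cosmosClientBuilder.BuildAndInitializeAsync(containers);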
I am using a User Defined Function across multiple collections within multiple Cosmos databases. Is there a way to store it somewhere and deploy it to all of these collections/databases at once? Or a way to update them all at the same time? Currently I am having to go through and manually update each UDF within each collection within each database.
You can write a console application for updating the UDFs:
private async Task<string> CreateUDFAsync(string collectionUri, string udfName, string udfBody)
{
    ResourceResponse<UserDefinedFunction> response = null;
    try
    {
        var existingUdf = await this.cosmosDbClient.ReadUserDefinedFunctionAsync($"{collectionUri}/udfs/{udfName}");
        existingUdf.Resource.Body = udfBody;
        response = await this.cosmosDbClient.ReplaceUserDefinedFunctionAsync(existingUdf.Resource);
    }
    catch (DocumentClientException)
    {
        // The UDF does not exist yet; create it.
        response = await this.cosmosDbClient.CreateUserDefinedFunctionAsync(collectionUri,
            new UserDefinedFunction
            {
                Id = udfName,
                Body = udfBody
            });
    }
    return response.Resource.AltLink;
}
It will replace an existing UDF, and create a new one in case it is missing.
In the Cosmos DB resource model, stored procedures, UDFs, merge procedures, triggers, and conflicts are container-level resources.
You have to create them for each container.
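For example, building on the console-application idea above, a sketch that reuses CreateUDFAsync across a set of collections (the database/collection names are hypothetical; you could also discover them at runtime via ReadDatabaseFeedAsync/ReadDocumentCollectionFeedAsync):
private async Task DeployUDFEverywhereAsync(string udfName, string udfBody)
{
    // Hypothetical targets; one entry per database/collection pair.
    var targets = new[]
    {
        UriFactory.CreateDocumentCollectionUri("db1", "coll1"),
        UriFactory.CreateDocumentCollectionUri("db2", "coll2")
    };

    foreach (var collectionUri in targets)
    {
        // CreateUDFAsync (above) replaces the UDF if it exists and creates it otherwise.
        await CreateUDFAsync(collectionUri.ToString(), udfName, udfBody);
    }
}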
I know that this question was asked already, but it seems that some more things have to be clarified. :)
The database is designed so that each user has their own privileges to read documents, so the pool needs to hold connections for different users, which falls outside the usual connection-pool concept. For optimization and performance I need to run a so-called "user preparation" step, which includes setting session variables, calculating and caching values, etc., and only then execute queries.
For now, I have two solutions. In the first solution, I first check that everything is prepared for the user and then execute one or more queries; in case it is not prepared, I run the "user preparation" first and then execute the query or queries. With this solution I lose a lot of performance, because the check happens every time, so I've decided on another solution.
The second solution uses a "database pool" where each pool serves one user. Only on the first connection (useCount === 0; I do not use {direct: true}) do I call the "user preparation" (a stored procedure that sets some session variables and prepares the cache), and then execute SQL queries.
The user preparation is done in the connect event within the initOptions parameter used to initialize pg-promise. I used the pg-promise-demo, so I do not need to explain the rest of the code.
The code for pgp initialization with the wrapper of database pooling looks like this:
import * as promise from "bluebird";
import pgPromise from "pg-promise";
import { IDatabase, IMain, IOptions } from "pg-promise";
import { IExtensions, ProductsRepository, UsersRepository, Session, getUserFromJWT } from "../db/repos";
import { dbConfig } from "../server/config";

// pg-promise initialization options:
export const initOptions: IOptions<IExtensions> = {
    promiseLib: promise,
    async connect(client: any, dc: any, useCount: number) {
        if (useCount === 0) {
            try {
                await client.query(pgp.as.format("select prepareUser($1)", [getUserFromJWT(session.JWT)]));
            } catch (error) {
                console.error(error);
            }
        }
    },
    extend(obj: IExtensions, dc: any) {
        obj.users = new UsersRepository(obj);
        obj.products = new ProductsRepository(obj);
    }
};

type DB = IDatabase<IExtensions> & IExtensions;
const pgp: IMain = pgPromise(initOptions);

class DBPool {
    private pool = new Map();
    public get = (ct: any): DB => {
        const checkConfig = {...dbConfig, ...ct};
        const {host, port, database, user} = checkConfig;
        const dbKey = JSON.stringify({host, port, database, user});
        let db: DB = this.pool.get(dbKey) as DB;
        if (!db) {
            // const pgp: IMain = pgPromise(initOptions);
            db = pgp(checkConfig) as DB;
            this.pool.set(dbKey, db);
        }
        return db;
    }
}

export const dbPool = new DBPool();

import diagnostics = require("./diagnostics");
diagnostics.init(initOptions);
And the web API looks like this:
GET("/api/getuser/:id", (req: Request) => {
    const user = getUserFromJWT(session.JWT);
    const db = dbPool.get({ user });
    return db.users.findById(req.params.id);
});
I'm interested in whether the source code instantiates pgp correctly, or whether it should be instantiated within the if block inside the get method (the commented-out line)?
I've seen that pg-promise uses a DatabasePool singleton exported from dbPool.js, which is similar to my DBPool class but exists to issue the warning "WARNING: Creating a duplicate database object for the same connection". Is it possible to use the DatabasePool singleton instead of my dbPool singleton?
It seems to me that dbContext (the second parameter in the pgp initialization) could solve my problem, but only if it could be passed as a function, not as a value or object. Am I wrong, or can dbContext be dynamic when accessing a database object?
I wonder if there is a third (better) solution, or any other suggestion?
If you are troubled by this warning:
WARNING: Creating a duplicate database object for the same connection
but your intent is to maintain a separate pool per user, you can indicate so by providing any unique parameter for the connection. For example, you can include a custom property with the user name:
const cn = {
    database: 'my-db',
    port: 12345,
    user: 'my-login-user',
    password: 'my-login-password',
    // ...
    my_dynamic_user: 'john-doe'
};
This will be enough for the library to see that there is something unique in your connection, which doesn't match the other connections, and so it won't produce that warning.
This will work for connection strings as well.
Please note that what you are trying to achieve can only work well when the total number of connections well exceeds the number of users. For example, if you can use up to 100 connections with up to 10 users, then you can allocate 10 pools, each with up to 10 connections in it. Otherwise, the scalability of your system will suffer: the total number of connections is a very limited resource, and you would typically never go beyond 100 connections, because running so many physical connections concurrently creates excessive load on the CPU. That's why sharing a single connection pool scales much better.
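If you do go with per-user pools, you can also cap the size of each pool via the pool-size setting of the connection object, so the total stays within budget. A sketch with illustrative numbers only:
// Illustrative budget: 100 physical connections shared by up to 10 users.
const MAX_TOTAL_CONNECTIONS = 100;
const MAX_USERS = 10;

const cn = {
    ...dbConfig,
    my_dynamic_user: 'john-doe',            // unique per-user marker, as above
    max: MAX_TOTAL_CONNECTIONS / MAX_USERS  // cap each per-user pool at 10
};

const db = pgp(cn); // one such database object (and pool) per user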
I tried to connect to DocumentDB from my ASP.NET website, but I am getting this error:
DocumentClientException: Entity with the specified id does not exist in the system
The code is as follows, in the .aspx code-behind:
protected async void Page_Load(object sender, EventArgs e)
{
    Response.Write("Page Load<br/>");
    await GetData();
}

public async Task GetData()
{
    try
    {
        Response.Write("<br/> Get Data function Start<br/><br/>");
        using (var client = new DocumentClient(new Uri(ConfigurationManager.AppSettings["endpoint"]), ConfigurationManager.AppSettings["authKey"]))
        {
            //await client.OpenAsync();
            RequestOptions reqOpt = new RequestOptions { PartitionKey = new PartitionKey(209) };
            var parameters = new dynamic[] { 1 };
            StoredProcedureResponse<object> result = await client.ExecuteStoredProcedureAsync<object>(
                UriFactory.CreateStoredProcedureUri(ConfigurationManager.AppSettings["database"], ConfigurationManager.AppSettings["pcsd"], "GetMemberbyId"),
                reqOpt, parameters);
            Response.Write(result.Response.ToString());
        }
        Response.Write("<br/><br/> Get Data function End");
    }
    catch (Exception ex)
    {
        Response.Write(ex.Message);
    }
}
The stored procedure is as follows:
function GetMemberbyId(memId) {
    var collection = getContext().getCollection();
    //return getContext().getResponse().setBody('no docs found');

    // Query documents and take the 1st item.
    var isAccepted = collection.queryDocuments(
        collection.getSelfLink(),
        'SELECT * FROM root m WHERE m.memberId = ' + memId,
        function (err, feed, options) {
            if (err) throw err;
            // Check the feed and if empty, set the body to 'no docs found',
            // else take the 1st element from the feed.
            if (!feed || !feed.length) getContext().getResponse().setBody('no docs found');
            else getContext().getResponse().setBody(feed);
        });

    if (!isAccepted) throw new Error('The query was not accepted by the server.');
}
On localhost it works fine, but when the website is published to Azure Web Apps and running, I get the above error.
I just spent a couple of hours troubleshooting this, only to find that I had firewalled my instance to a point where I could not connect locally. Keep in mind that the Azure portal document query will obviously still work even when you have no direct access via the API / C# client.
Try setting the firewall to allow All Networks temporarily to check access.
I would check in the portal that "GetMemberbyId" is the name of the stored procedure on the collection you are trying to run it against. It could be that the stored procedure is on a different collection, or that it is named something else.
If that all checks out, I have had more luck with the __.filter() way of querying documents on the server. See:
http://azure.github.io/azure-documentdb-js-server/
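For reference, a rough sketch (untested) of the same stored procedure rewritten with __.filter(); __ is the built-in shorthand for the collection object with query support:
function GetMemberbyId(memId) {
    var isAccepted = __.filter(
        function (doc) { return doc.memberId === memId; },
        function (err, feed) {
            if (err) throw err;
            if (!feed || !feed.length) __.response.setBody('no docs found');
            else __.response.setBody(feed);
        });
    if (!isAccepted) throw new Error('The filter was not accepted by the server.');
}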
I'd like to override the delete operation on my Azure Mobile Services table to make it more like an update than a real delete. I have an additional column named IsDeleted, and I'd like to set its value to true when the delete operation is executed.
I figured out that what I need is to:
fire my own 'update' inside the del function,
drop the current request.execute() call,
prepare and send the response myself.
That means my del function should look like this:
function del(id, user, request) {
    // execute update query to set 'isDeleted' - true
    // return standard response
    request.respond();
}
As you can see, I'm missing the first part of the function, the update part. Could you help me write it? I read the Mobile Services server script reference, but there is no info about making additional queries inside a server script function.
There are basically two ways to do that: using the tables object, or using the mssql object. The links point to the appropriate reference.
Using mssql (I didn't try it, you may need to update your SQL statement):
function del(id, user, request) {
    // NOTE: isDeleted is assumed to be a bit column, so set it to 1 rather than true.
    var sql = 'UPDATE <yourTableName> SET isDeleted = 1 WHERE id = ?';
    mssql.query(sql, [id], {
        success: function () {
            request.respond(statusCodes.OK);
        }
    });
}
Using tables (again, only tested in notepad):
function del(id, user, request) {
    var table = tables.getTable('YourTableName');
    table.where({ id: id }).read({
        success: function (items) {
            if (items.length === 0) {
                request.respond(statusCodes.NOT_FOUND);
            } else {
                var item = items[0];
                item.isDeleted = true;
                table.update(item, {
                    success: function () {
                        request.respond(statusCodes.OK, item);
                    }
                });
            }
        }
    });
}
There is a Node.js driver for SQL Server that you might want to check out.
The script component of Mobile Services uses Node.js. You might want to check out the session from AzureConf called "Javascript, meet cloud".