Following https://learn.microsoft.com/en-us/azure/application-insights/app-insights-usage-send-user-context, I thought it would be easy to get cross-schema tracking of a user. However, I'm finding the absolute opposite.
I created the telemetry initializer (the code in the document is seriously buggy):
public class UserTrackingTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        if (HttpContext.Current?.Session == null)
            return;

        if (HttpContext.Current.Session["UserId"] == null)
        {
            HttpContext.Current.Session["UserId"] = Guid.NewGuid().ToString();
        }

        telemetry.Context.User.Id = (string)HttpContext.Current.Session["UserId"];
        telemetry.Context.Session.Id = HttpContext.Current.Session.SessionID;

        var authUser = _sessionManager.GetAuthenticatedUser<UserDetails>();
        if (authUser != null)
        {
            telemetry.Context.User.AuthenticatedUserId = authUser.UserId;
        }
    }
}
Then I registered it with Application Insights:
TelemetryConfiguration.Active.TelemetryInitializers.Add(new UserTrackingTelemetryInitializer());
I then played with my site, expecting these values to start showing up. They did not. I continued to get random strings for user_Id and session_Id (things like NVhLF and whatnot). So I thought: okay, maybe it's logging before I update those values? I went and inserted my initializer first:
TelemetryConfiguration.Active.TelemetryInitializers.Insert(0, new UserTrackingTelemetryInitializer());
Same thing. So I started to look at schemas I don't usually look at. Nothing. Then I pulled up traces, and there I finally found where my data is going. But the other schemas don't have the updated values, so what use is this? While traces shows the expected values for user_Id and session_Id, the others continue to show garbage. Am I doing something wrong?
Indeed, the document you followed does not work; feedback has been submitted here.
Just for your reference, the only way I could find to set these values is to call TrackEvent() / TrackRequest() or the other TrackXxx() methods after implementing your own telemetry initializer.
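A minimal sketch of that approach, assuming the initializer shown above has already been registered (the event name is illustrative):
var telemetryClient = new TelemetryClient();

// Every registered ITelemetryInitializer runs for items tracked this way,
// so user_Id and session_Id get stamped from the current session.
telemetryClient.TrackEvent("ProfileViewed");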
I'm trying to use a background task to gather Likes/Comments from the Facebook Graph API and use that to drive our blog's trending.
Here the trendingModels have already been populated and are being used to fill in the TrendingParts.GraphId and TrendingParts.TrendingValue.
I'm not getting any exceptions and the properties on TrendingPart point to the fields in the TrendingPartRecord.
Yet nothing persists to the database, any ideas why?
_orchardService is an IOrchardServices instance.
var articleParts = _orchardService.ContentManager.GetMany<TrendingPart>(
    trendingModels.Select(r => r.OrchardId).ToList(),
    VersionOptions.Published,
    QueryHints.Empty);

// Cycle through the records and update them from the matching model
foreach (var articlePart in articleParts)
{
    ArticleTrendingModel trendingModel = trendingModels.FirstOrDefault(r => r.OrchardId == articlePart.Id);
    if (trendingModel != null)
    {
        // Not persisting to the database, WHY?
        // What's missing?
        // If I'm understanding things properly nHibernate should push this to the db autoMagically.
        articlePart.GraphId = trendingModel.GraphId;
        articlePart.TrendingValue = trendingModel.TrendingValue;
    }
}
Edit:
It's probably worth noting that I can update and publish the fields on the TrendingPart in the admin panel but the saved changes don't appear in the MyModule_TrendingPartRecord table.
The solution was to change my service to a transient dependency using ITransientDependency.
The service was holding a reference to the part records, and because it was treated as a singleton it was never disposed, so the push to the database was never made.
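A minimal sketch of that fix, using hypothetical names (ITransientDependency is Orchard's marker interface that gives the service a fresh instance per resolution instead of a reused one):
// Hypothetical service illustrating the fix: the marker interface
// changes the lifetime, so no part records are cached across requests.
public interface ITrendingService : ITransientDependency
{
    void UpdateTrending(IList<ArticleTrendingModel> trendingModels);
}

public class TrendingService : ITrendingService
{
    private readonly IOrchardServices _orchardService;

    public TrendingService(IOrchardServices orchardService)
    {
        _orchardService = orchardService;
    }

    public void UpdateTrending(IList<ArticleTrendingModel> trendingModels)
    {
        // The update loop from the question goes here; with a transient
        // service, NHibernate's session can flush the changed parts.
    }
}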
I think I'm missing something about how to search in JavaMail. The approach seems to be:
Download empty messages from a folder.
Create a new SearchTerm that matches your results.
Filter the results (yourFolder.search) using your search term.
This works. But why do it this way? If I'm using JavaMail to connect to something like Gmail, the search isn't being executed server-side, and there doesn't seem to be any advantage to the whole javax.mail.search.SearchTerm construct in terms of efficiency or reducing the amount of data that needs to be sent over the network.
I don't see any way that executes a search on the server side and returns a list of matches. Any ideas?
EDIT: Including pseudocode of what I'm doing now, which doesn't execute any search on the server-side. Even if I converted this to use SearchTerm it still wouldn't be doing anything server-side, right?
Properties props = System.getProperties();
props.setProperty("mail.store.protocol", "gimaps");

final Session session = Session.getDefaultInstance(props, null);
final GmailSSLStore store = (GmailSSLStore) session.getStore("gimaps");
store.connect(ADDRESS, PASSWORD);

final GmailFolder allMailFolder = (GmailFolder) store.getFolder("[Gmail]/All Mail");
allMailFolder.open(Folder.READ_ONLY);

final Message[] allMessages = allMailFolder.getMessages();
System.out.println("Messages:" + allMessages.length);

FetchProfile fp = new FetchProfile();
fp.add(FetchProfile.Item.ENVELOPE);
allMailFolder.fetch(allMessages, fp);

for (final Message message : allMessages) {
    final Address[] addrs = message.getFrom();
    if (addrs != null) {
        for (final Address addr : addrs) {
            if (addr.toString().toLowerCase().contains("george")) {
                System.out.println(addr.toString());
            }
        }
    }
}
You're doing something wrong, but you haven't provided enough details about what you're doing for us to know what you're doing wrong.
Are you using IMAP?
Show us some code and the debug output.
If you're searching in an IMAP folder using the predefined SearchTerm implementations, it will try to perform the search on the server. Look at the implementation of SearchSequence.generateSequence. In your example you would probably want to use FromStringTerm.
If you're using the gimap provider, you can also use Google's IMAP extensions, which include the X-GM-RAW search attribute, allowing you to search exactly like in the Gmail web interface. The Java implementation is in GmailRawSearchTerm and only works server-side.
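For example, a minimal sketch of the server-side equivalent of the loop above (FromStringTerm is translated into an IMAP SEARCH command, so only the matching messages come back over the network):
// Server-side search: replaces the client-side getFrom() filtering loop.
final Message[] matches = allMailFolder.search(new FromStringTerm("george"));
for (final Message message : matches) {
    System.out.println(message.getFrom()[0]);
}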
I'm trying to make my MVC4 website check whether people should be alerted by email because they haven't done something.
I'm having a hard time figuring out how to approach this. I checked whether the shared hosting platform would allow me to activate some sort of cron job, but this is not available.
So now my idea is to perform this check on each page request, which already seems suboptimal (because of the overhead). But I thought that by running it asynchronously it would not get in the way of people just visiting the site.
I first tried to do this in the Application_BeginRequest method in Global.asax, but that gets called multiple times per page request, so it didn't work.
Next I found that I can make a global filter which executes on OnResultExecuted, which seemed promising, but it's still a no-go.
The problem I get there is that I'm using MVCMailer to send the mails, and when I execute it I get the error: {"Value cannot be null.\r\nParameter name: httpContext"}
This probably means that the mailer needs the context.
The code I now have in my global filter is the following:
public override void OnResultExecuted(ResultExecutedContext filterContext)
{
    base.OnResultExecuted(filterContext);
    HandleEmptyProfileAlerts();
}

private void HandleEmptyProfileAlerts()
{
    new Thread(() =>
    {
        bool active = false;
        // This is the call that throws "Value cannot be null ... httpContext":
        // the mailer needs an HttpContext, which isn't available on this thread.
        new UserMailer().AlertFirst("bla@bla.com").Send();

        DB db = new DB();
        DateTime CutoffDate = DateTime.Now.AddDays(-5);
        var ProfilesToAlert = db.UserProfiles.Where(x => x.CreatedOn < CutoffDate && !x.ProfileActive && x.AlertsSent.Where(y => y.AlertType == "First").Count() == 0).ToList();
        foreach (UserProfile up in ProfilesToAlert)
        {
            if (active)
            {
                new UserMailer().AlertFirst(up.UserName).Send();
                up.AlertsSent.Add(new UserAlert { AlertType = "First", DateSent = DateTime.Now, UserProfileID = up.UserId });
            }
            else
                System.Diagnostics.Debug.WriteLine(up.UserName);
        }
        db.SaveChanges();
    }).Start();
}
So my question is, am I going about this the right way, and if so, how can I make sure that MVCMailer gets the right context?
The usual way to do this kind of thing is to have a single background thread that periodically does the checks you're interested in.
You would start the thread from Application_Start(). It's common to use a database to queue and store work items, although it can also be done in memory if that works better for your app.
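A minimal sketch of that pattern, reusing HandleEmptyProfileAlerts() from the question (the one-hour interval is an arbitrary choice):
protected void Application_Start()
{
    // ... existing MVC registration ...

    var worker = new Thread(() =>
    {
        while (true)
        {
            try
            {
                HandleEmptyProfileAlerts(); // the check from the question
            }
            catch (Exception ex)
            {
                System.Diagnostics.Debug.WriteLine(ex);
            }

            Thread.Sleep(TimeSpan.FromHours(1)); // periodic, not per request
        }
    });
    worker.IsBackground = true; // don't block app domain shutdown
    worker.Start();
}
Note that on shared hosting the application pool can be recycled when the site is idle, so a thread like this only runs while the application is alive.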
I'm working with Windows Azure Table Storage and have a simple requirement: add a new row, overwriting any existing row with that PartitionKey/RowKey. However, saving the changes always throws an exception, even if I pass in the ReplaceOnUpdate option:
tableServiceContext.AddObject(TableName, entity);
tableServiceContext.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);
If the entity already exists it throws:
System.Data.Services.Client.DataServiceRequestException: An error occurred while processing this request. ---> System.Data.Services.Client.DataServiceClientException: <?xml version="1.0" encoding="utf-8" standalone="yes"?>
<error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
<code>EntityAlreadyExists</code>
<message xml:lang="en-AU">The specified entity already exists.</message>
</error>
Do I really have to manually query for the existing row first and call DeleteObject on it? That seems very slow. Surely there is a better way?
As you've found, you can't just add another item that has the same row key and partition key, so you will need to run a query to check whether the item already exists. In situations like this I find it helpful to look at the Azure REST API documentation to see what is available to the storage client library. You'll see that there are separate methods for inserting and updating. ReplaceOnUpdate only has an effect when you're updating, not inserting.
While you could delete the existing item and then add the new one, you could just update the existing one (saving you one round trip to storage). Your code might look something like this:
var existsQuery = from e in tableServiceContext.CreateQuery<MyEntity>(TableName)
                  where e.PartitionKey == objectToUpsert.PartitionKey
                     && e.RowKey == objectToUpsert.RowKey
                  select e;

MyEntity existingObject = existsQuery.FirstOrDefault();

if (existingObject == null)
{
    tableServiceContext.AddObject(TableName, objectToUpsert);
}
else
{
    existingObject.Property1 = objectToUpsert.Property1;
    existingObject.Property2 = objectToUpsert.Property2;
    tableServiceContext.UpdateObject(existingObject);
}

tableServiceContext.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);
EDIT: While correct at the time of writing, with the September 2011 update Microsoft has updated the Azure Table API to include two upsert commands, Insert or Replace Entity and Insert or Merge Entity.
In order to operate on an existing object NOT managed by the TableContext, with either DeleteObject or SaveChanges with the ReplaceOnUpdate option, you need to call AttachTo to attach the object to the TableContext, instead of calling AddObject, which instructs the TableContext to attempt an insert.
http://msdn.microsoft.com/en-us/library/system.data.services.client.dataservicecontext.attachto.aspx
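A minimal sketch of that approach (passing "*" as the ETag is an assumption here; it makes the replace unconditional):
// AttachTo tells the context the entity already exists server-side,
// so SaveChanges issues an update rather than an insert.
tableServiceContext.AttachTo(TableName, objectToUpsert, "*");
tableServiceContext.UpdateObject(objectToUpsert);
tableServiceContext.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);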
In my case it was not possible to remove the entity first, so I do it like this; this results in one transaction to the server which first removes the existing object and then adds the new one, removing the need to copy property values:
var existing = from e in _ServiceContext.AgentTable
               where e.PartitionKey == item.PartitionKey
                  && e.RowKey == item.RowKey
               select e;

_ServiceContext.IgnoreResourceNotFoundException = true;
var existingObject = existing.FirstOrDefault();
if (existingObject != null)
{
    _ServiceContext.DeleteObject(existingObject);
}

_ServiceContext.AddObject(AgentConfigTableServiceContext.AgetnConfigTableName, item);
_ServiceContext.SaveChangesWithRetries();
_ServiceContext.IgnoreResourceNotFoundException = false;
Insert or Replace and Insert or Merge were added to the API in September 2011. Here is an example using Storage API 2.0, which is easier to understand than the way it was done in the 1.7 API and earlier.
public void InsertOrReplace(ITableEntity entity)
{
    retryPolicy.ExecuteAction(
        () =>
        {
            try
            {
                TableOperation operation = TableOperation.InsertOrReplace(entity);
                cloudTable.Execute(operation);
            }
            catch (StorageException e)
            {
                string message = "InsertOrReplace entity failed.";
                if (e.RequestInformation.HttpStatusCode == 404)
                {
                    message += " Make sure the table is created.";
                }
                // do something with message
            }
        });
}
The Storage API does not allow more than one operation per entity (delete+insert) in a group transaction:
An entity can appear only once in the transaction, and only one operation may be performed against it.
see MSDN: Performing Entity Group Transactions
So in fact you need to read first and decide on insert or update.
You may use the UpsertEntity and UpsertEntityAsync methods of the TableClient in the official Microsoft Azure.Data.Tables package.
A fully working example is available at https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-dotnet/blob/main/2-completed-app/AzureTablesDemoApplicaton/Services/TablesService.cs:
public void UpsertTableEntity(WeatherInputModel model)
{
    TableEntity entity = new TableEntity();
    entity.PartitionKey = model.StationName;
    entity.RowKey = $"{model.ObservationDate} {model.ObservationTime}";

    // The other values are added like items in a dictionary
    entity["Temperature"] = model.Temperature;
    entity["Humidity"] = model.Humidity;
    entity["Barometer"] = model.Barometer;
    entity["WindDirection"] = model.WindDirection;
    entity["WindSpeed"] = model.WindSpeed;
    entity["Precipitation"] = model.Precipitation;

    _tableClient.UpsertEntity(entity);
}
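For example, a call like this (the service field, values, and property types are hypothetical):
// Hypothetical usage of the method above.
var model = new WeatherInputModel
{
    StationName = "Seattle",        // becomes the PartitionKey
    ObservationDate = "2021-07-01", // joined with ObservationTime to form the RowKey
    ObservationTime = "12:00",
    Temperature = 21.5
};
_tablesService.UpsertTableEntity(model);
Note that UpsertEntity defaults to TableUpdateMode.Merge; passing TableUpdateMode.Replace as a second argument fully overwrites an existing entity, which matches the "overwrite any existing row" requirement from the original question.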
I've created a snippet that pulls data from a database table and displays it in tabular format. The snippet takes an id as a parameter, and this is added to the SQL query.
My problem is that if I've got more than one snippet call on the same page (sometimes I need the tabular data for different ids displayed on one page), all the table data is the same as that of the last database call made by the last snippet.
What do I need to do to stop the snippet's database calls from being cached, so that each call displays its own content?
I've tried setting the page to non-cacheable, used the [! !] brackets for the snippet calls, and even used the function_exists() method, but none of them helped.
Please can someone help me?
thanks
Try this at the end of the snippet:
mysql_connect('host', 'user', 'pass');
mysql_select_db('db_name');
You need to specify the connection parameters, of course.
It would help to answer if you could post your snippet. I do this with multiple calls on a page without issue, so either something is wrong inside the snippet, or you need to output to unique placeholder names.
You have encountered a glitch in ModX, and it took me a long time to solve. ModX does a lot of caching using hashing, and apparently, when multiple connections are made from within one page spread over multiple snippets, this erratic behaviour can be seen. This is most likely unwanted behaviour; it can be solved easily but gives you a terrible headache otherwise.
One symptom is that $modx->getObject($classname, $id) often returns null.
The solution is very simple:
either use a static class with a single db instance, or
use $modx->setPlaceholder($instance, $tag);, or a combination.
My solution has been:
class dt__xpdo {
    private function __construct() {}

    public function __destruct() {
        $this->close();
    }

    static public function db($modx = null) {
        if ($modx->getPlaceholder('dt_xpdo') == '') {
            $dt_user = 'xxxxxxxxx';
            $dt_pw = 'xxxxxxxxx';
            $dt_host = 'localhost';
            $dt_dbname = 'xxxxxxxxx';
            $dt_port = '3306';
            $dt_dsn = "mysql:host=$dt_host;dbname=$dt_dbname;port=$dt_port;charset=utf8";

            $dt_xpdo = new xPDO($dt_dsn, $dt_user, $dt_pw);
            $dt_xpdo->setPackage('mymodel', MODX_CORE_PATH.'components/mymodel/'.'model/', '');
            //$modx->log(modX::LOG_LEVEL_DEBUG, 'mymodel.config.php');
            //$modx->log(modX::LOG_LEVEL_DEBUG, 'Could not addPackage for mymodel!');

            // Store the single xPDO instance on a modX placeholder so every
            // snippet on the page reuses the same connection.
            $modx->setPlaceholder('dt_xpdo', $dt_xpdo);
        }
        return $modx->getPlaceholder('dt_xpdo');
    }
}
Now you can use this in your code:
require_once 'above.php';
and use something like
$xpdo = dt__xpdo::db($modx);
and continue flawlessly!