We are planning to use New Relic on our website hosted in AWS.
Which would be the better option - using the copy/paste JavaScript snippet or installing the agent?
I am new to New Relic monitoring. I thought I could install the agent, but it looks like I would have to expose the license key. This is an open-source project, so exposing the license key is not an option. Is there any alternative to that?
If I go with the copy/paste JavaScript, do I need to paste the script on every page?
Can someone share best practices here?
There are different types of New Relic monitoring, including server monitoring, application monitoring, and browser monitoring. Also, if your application has other components, such as a database, there is a separate monitoring agent for those as well.
I'm assuming you are asking about Node.js application monitoring, since you tagged your question with node.js. Best practice here is to configure the newrelic.js settings script to pull your app_name and license_key values from environment variables instead of adding them directly to the source.
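For illustration, a minimal newrelic.js along those lines might look like the sketch below. NEW_RELIC_APP_NAME and NEW_RELIC_LICENSE_KEY are variables the Node.js agent also reads directly from the environment; the fallback app name is just a placeholder.

```javascript
// newrelic.js - minimal sketch; the secrets live in the environment,
// so nothing sensitive is committed to the open-source repository.
'use strict';

exports.config = {
  app_name: [process.env.NEW_RELIC_APP_NAME || 'my-app'],
  license_key: process.env.NEW_RELIC_LICENSE_KEY,
  logging: {
    level: 'info'
  }
};
```

You would then set those variables in your AWS environment (for example, in the instance's user data or your process manager's configuration) rather than in the repository.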
I just started working with EPiServer (coming from Sitecore) and am looking for a way to synchronize content between environments automatically (developer-to-developer and developer-to-QA).
We have our QA environment on an Azure virtual machine and need to synchronize content during CI/CD.
EPiServer DXC Service doesn't meet the requirements because we are not working on a web app service.
Any ideas? Is there any already existing way to achieve it?
Automatic synchronization between environments has more or less gone away within the Episerver platform. The old way of doing this was mirroring, but that's not available in the DXP and is eventually being removed from the platform in favor of other strategies:
For moving bulk data (content items, content types, categories, visitor groups, etc.) between different environments, without touching code or the database, use the "Import Data" and "Export Data" tools within Admin mode. More information here: http://webhelp.episerver.com/latest/en/cms-admin/exporting-importing-data.htm
For bigger bulk migrations of data between environments, typically a database backup and restore is done between environments. Obviously, this is a bit more risky when involving a production environment.
If the content (or a content type change) is required as part of a deployment, you can build a content migration step (see the sketch after this list). More information here: https://www.gulla.net/en/blog/renaming-an-episerver-page-property-using-a-migration-step/ and https://world.episerver.com/documentation/developer-guides/CMS/Content/Refactoring-content-type-classes/
If you simply want to move authored content from a staging environment to production, it's suggested to create all content in production and use Episerver's Projects feature. More information here: https://webhelp.episerver.com/latest/en/cms-edit/projects.htm
If you are using Azure DevOps for CI/CD, you might want to look into the Epinova DXP Extension to Azure DevOps - it uses the Episerver Deployment API but makes it easier to set up your pipelines. See https://www.epinova.no/en/folg-med/blog/2020/episerver-dxp-content-harmonization-with-epinova-dxp-deployment/ for more info.
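To illustrate the migration-step option above, a step that renames a property might look roughly like the following sketch, modeled on the linked blog post (the class, content type, and property names are hypothetical):

```csharp
// Hypothetical Episerver migration step: tells the CMS that the
// "MainBody" property on "ArticlePage" used to be called "BodyText",
// so existing content is migrated instead of the property being dropped.
using EPiServer.DataAbstraction.Migration;

public class RenameBodyTextStep : MigrationStep
{
    public override void AddChanges()
    {
        ContentType("ArticlePage")
            .Property("MainBody")
            .UsedToBeNamed("BodyText");
    }
}
```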
Take a look at the Deployment API. There might be something in there for you. https://world.episerver.com/documentation/developer-guides/digital-experience-platform/deploying/episerver-digital-experience-cloud-deployment-api/
I am building a SharePoint 2013 provider-hosted app using the high-trust model. This allows a customer to deploy the .app to their App Catalog and make it available to all SharePoint Sites. The provider-hosted portion of the app runs in an IIS box (cluster) which the customer also deploys (on-premise) with setup instructions and automated tools.
The .app file structure includes the application manifest, which specifies the precise endpoint where the provider-hosted portion resides as well as the whitelisted endpoints the add-in can call. These are all set by entering URLs, hostnames, and port numbers into edit fields in Visual Studio's 'Deploy App' form just before the .app file is built and digitally signed.
This seems to work just fine for a single app built by IT folks internally, if the org is small enough... but I really want to be able to distribute this solution to more than one customer. In order to do so, I would have to ask the customer for their respective endpoints, enter them into my build tools, and rebuild the .app for them. This just doesn't seem right... no customer wants to talk to the developer first and have a custom-built app. And why should they? No code is changing...
Upon investigation into the .app file format, it turns out it is really just a simple .zip file - and inside (voila!) there is the app manifest! Unfortunately, if you edit the app manifest and re-zip the file, the digital signature is broken, and the .app no longer works. (grrrr...)
What I want to do is simply reconfigure the app manifest to match the environment where it is deployed. This can happen programmatically at setup/installation time, or perhaps even just prior to download, but it cannot be a process that involves developers typing into Visual Studio and pressing Rebuild. That simply won't scale.
Is there a tool that exists that can help with this problem? If not, does anyone have experience with the signing of .app files programmatically? I'm open to skinning this cat in any way possible.
This is a wild idea and maybe not even possible:
1. Create a web UI where clients enter their endpoints.
2. Have an internal process that invokes MSBuild/TFS to package the app with those endpoints.
3. Change the app manifest with a pre-build PowerShell step (see the sketch below).
4. Then provide the app via email or download?
http://www.sharepointconfig.com/2013/10/building-sharepoint-2013-apps-with-tfs-2013/
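A rough sketch of the pre-build manifest edit (step 3) might look like this - the paths, endpoint, and element access are illustrative, and the .app still has to be packaged and re-signed afterwards by the build:

```powershell
# Pre-build step: point AppManifest.xml at the customer's endpoint
# before MSBuild packages and signs the .app. All values are placeholders.
param(
    [string]$ManifestPath = '.\AppManifest.xml',
    [string]$CustomerEndpoint = 'https://apps.customer.example'
)

[xml]$manifest = Get-Content -Path $ManifestPath
# StartPage carries the provider-hosted endpoint in a SharePoint app manifest.
$manifest.App.Properties.StartPage = "$CustomerEndpoint/Pages/Default.aspx?{StandardTokens}"
$manifest.Save((Resolve-Path $ManifestPath).Path)
```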
This is more of a workaround than a true answer - but would work:
For on-premise deployments of high-trust SharePoint 2013 apps - build the application with "known endpoints" - essentially hard-coded endpoints that can be deployed locally. Then instruct the customer to redirect those endpoints using DNS records or hosts file entries. In addition, the client would need to generate a local wildcard certificate signed by their own trusted root in order to satisfy the SharePoint 2013 app model requirements for appdomain and server-to-server communication.
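For example, the redirect on each machine in the topology could be as simple as a hosts-file entry (the IP and hostname below are placeholders):

```powershell
# Point the app's hard-coded hostname at the locally deployed IIS box
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" `
    -Value "10.0.0.25    apps.contoso.local"
```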
This is by no means ideal, but for certain environments it might be the most practical approach. This also allows scaling for the IIS WebApp to occur at the customer-site, where it realistically belongs for a high-trust app.
This approach avoids the need to automate build tools and also avoids building a separate instance for every customer - both of which are somewhat undesirable. It might, for those reasons, be slightly less costly - but it also pushes some responsibility to the customer. Namely - hard-coding a DNS entry locally for machines in the topology.
I'm writing a script that sets up a lot of different applications in Windows (mainly SVN and open-source servers for HTTP, DNS, mail, FTP, and DB). This script is intended to be executed on new/clean Windows workstations for new developers; it automatically sets everything up to create an environment very similar to the one in production. After it's executed, everything runs locally and the developer can start working right away.
This not only helps new developers but also all existing developers: whenever there are changes in the whole system, everything is replicated locally.
The one thing I'm still not able to do is make some kind of backup of an IIS server that is running a web app (it's on the Prod server) and restore it automatically on the new developer's machine so they don't have to install/configure IIS locally.
I've read about using appcmd.exe to create and restore backups, but that works only for the same machine (it uses encryption keys and those keys change between computers).
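For reference, these are the appcmd calls I mean (run from an elevated prompt; the backup name is arbitrary):

```
%windir%\system32\inetsrv\appcmd add backup "MyIISBackup"
%windir%\system32\inetsrv\appcmd restore backup "MyIISBackup"
```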
Is there a way, a scriptable way, to take everything IIS related from one server and restore it on another server, without user intervention and having the restored IIS run exactly as the original?
Just putting this here so anyone who comes across this will understand why it wasn't answered: a website has a massive number of variables associated with it, which prevents any easy method of copying all of its configuration with one or even just a few cmdlets.
To get started, though, you would want to become very familiar with the applicationHost.config file and how you access the properties within it using Get-WebConfigurationProperty. One way to get familiar with scripting against web configuration properties is to use the Configuration Editor in IIS. Whenever you make a change in the Configuration Editor, before committing the changes there is a nifty little link titled Generate Script, which has a PowerShell tab you can use to gather the proper Get/Set commands for the configuration elements within the applicationHost.config file.
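For instance, the kind of Get/Set pair Generate Script produces looks like this (the filter and property below are just an illustrative example):

```powershell
Import-Module WebAdministration

# Read a single property out of applicationHost.config
Get-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Filter 'system.webServer/directoryBrowse' -Name 'enabled'

# ...and the matching Set call for the same configuration element
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Filter 'system.webServer/directoryBrowse' -Name 'enabled' -Value $true
```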
I've created something almost exactly like what the OP is looking for; it spans 4 modules (over 20,000 lines of code) and has a SQL backend that holds all of the configuration elements.
A website can involve everything from underlying DLLs that may need to be registered, ISAPI/CGI restrictions and ISAPI filters, and accounts tied to the AppPool that may need to be added to certain local groups on the server, to secure bindings that require a certificate to be loaded on the server. You can see that this isn't a simple undertaking (and these are just a small portion of the variables a website may contain).
There is, however, a large set of cmdlets in the WebAdministration module that Microsoft provides out of the box, which you can leverage to aid you in developing something like this. I know this is four years old, but I hope anyone who stumbles on this will find the above useful.
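As a starting point, a few of those out-of-the-box cmdlets are shown below (the names, port, and path are illustrative):

```powershell
Import-Module WebAdministration

# Recreate an app pool and site on the target machine
New-WebAppPool -Name 'DevAppPool'
New-Website -Name 'DevSite' -Port 8080 `
    -PhysicalPath 'C:\inetpub\DevSite' -ApplicationPool 'DevAppPool'

# Add an extra binding to the new site
New-WebBinding -Name 'DevSite' -Protocol http -Port 8081
```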
I am aware of WebLogic templates, but out of curiosity I wanted to know: is it OK to copy a domain in WebLogic in situations where we need to have the same configuration? I have already done so and have been successful in testing my application.
You can get away with doing this, but there are a couple of more reliable (and scriptable) ways to migrate the same configuration across the development team, or to create new deployment environments.
The domain template builder lets you build your own custom domain template from an existing domain: http://download.oracle.com/docs/cd/E13179_01/common/docs92/tempbuild/starttb.html
There are a couple of ways to get it done with WLST as well:
You can use configToScript to spit out an entire WLST script (and properties file) to recreate the exact configuration you've got, or...
You can use readDomain and writeDomain in offline mode to recreate an existing configuration in a new domain:
readDomain: http://download.oracle.com/docs/cd/E13222_01/wls/docs92/config_scripting/reference.html#wp1003638
writeDomain: http://download.oracle.com/docs/cd/E13222_01/wls/docs92/config_scripting/reference.html#wp1003688
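As a minimal offline sketch (the domain paths and server name are placeholders), saved as clone_domain.py and run with java weblogic.WLST clone_domain.py:

```python
# WLST offline: read an existing domain, tweak environment-specific
# settings, and write it out as a new domain.
readDomain('/opt/bea/user_projects/domains/prod_domain')

# Adjust anything that must differ per environment, e.g. the admin server
cd('/Servers/AdminServer')
set('ListenAddress', 'localhost')
set('ListenPort', 7001)

writeDomain('/opt/bea/user_projects/domains/dev_domain')
closeDomain()
```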
It's okay to copy the domains over and it worked exceptionally well prior to WebLogic 9.2. However, there are some weird bugs that pop up for versions that are using the portal for the console.
Also, after copying the domain you would want to make sure that all listen addresses and ports have been modified accordingly, so that your local managed server doesn't attempt to connect to the production administration server on startup.
I am seeking advice: ideally, I would like to give an administrator (of the web server) one file (.exe, .msi, .bat, whatever you suggest), so that when they execute the package, it will set up my application (containing .aspx pages, a .xap Silverlight package, a .svc web service, etc.) on IIS. This will include, and certainly not be limited to, things in IIS Manager like creating a virtual directory, setting the path, the default document, security, and all of the IIS settings one finds via inetmgr and Properties. I would also maybe like to run a .bat file (not sure if this is correct) to check for certain settings and ping other servers for status.
Many years ago, I used to automate everything with things like .bat files - it got the job done, and it was amazing what I could do. Fast forward to now, and I am approaching the automation process again. I wanted to know if there is anything new out there.
Any and all advice will be greatly appreciated!
It's quite a learning curve, but yes, WiX / InstallShield / MSI can do this. I've done installers for n-tier / SOA systems, including single-tenant SaaS, where you could run the application-layer installer dozens of times, creating new instances running on different host headers or ports, pointed at different data layers and different configuration settings. You could then do the same for the web UI, pointing it at whichever application layer you want.
Basically, whether it's installing .NET, setting up vDirs / AppPools / websites / extensions, reading and writing XML config files, executing SQL scripts, creating services, and so on - it can all be done... if you take the time to learn it all. Deployment engineering is a bigger domain than it first appears to be.
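As a small taste of what that looks like, a WiX fragment using the IIS extension to create a site might look roughly like this (the IDs, directory, and port are placeholders):

```xml
<!-- Sketch using the WiX IIS extension; all names and values are illustrative -->
<Fragment xmlns:iis="http://schemas.microsoft.com/wix/IIsExtension">
  <Component Id="DevSiteComponent" Guid="*" Directory="INSTALLFOLDER">
    <iis:WebAppPool Id="DevAppPool" Name="DevAppPool" />
    <iis:WebSite Id="DevSite" Description="My App" Directory="INSTALLFOLDER">
      <iis:WebAddress Id="AllUnassigned" Port="8080" />
    </iis:WebSite>
  </Component>
</Fragment>
```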
As for .bat files, that's bad form. First you work to leverage native capabilities before writing custom actions. Then, when you do have to write one, you design it to be declarative and transactional (install, uninstall, rollback, commit). WiX has a really nice framework called DTF that allows you to encapsulate C# classes as if they were C++ from MSI's perspective, and it provides a nice interop library for talking to MSI during the install.
Visual Studio has a Web Setup Package project you can use for this.