Email templates - docusignapi

Is there a way to fully customize the HTML emails being sent out to customers on any tier plan? (A support agent was unable to answer this question.)
If not, are there any plans to add more customization to the HTML email branding? I'm having issues with the recommended width and height, as the image looks terrible on a high-density pixel-ratio screen. Ideally I could work around that by uploading the image at twice the size and then scaling it down by defining its width; is something like that possible in the future? Or even, say, separate image files for web and for emails?
Thanks

Is there a way to fully customize the HTML emails being sent out to customers on any tier plan?
Not beyond the branding feature. Higher-tier plans support multiple branding files, e.g., for different departments within a large org.
Sending HTML email to signers who could be reading it on anything from small, low-resolution mobile devices to high-density pixel-ratio screens is always a matter of tough compromise.

Related

Analysing a video stream for conditions

I'm not entirely new to Azure, but I am new to the Media Services available on Azure. I'm looking for suggestions on which Azure components to consider for building a solution that analyzes video for certain conditions
(e.g., 1) presence of a human: yes/no; 2) an alert if no human presence is detected for a certain number of minutes; 3) confirmation of whether an identified human is wearing a uniform; etc.).
I have built a somewhat similar on-premises solution in the past using OpenCV and some open-source ML libraries, but I'm not sure which Azure services I can use if this will be running in Azure.
I can live-stream the video to Azure and am not looking for an edge solution.
I looked at Azure Video Indexer and it looks promising, but it is probably tuned more for audio analysis than for image-frame analysis.
Suggestions would be appreciated.
Azure Video Indexer is optimized for files, not streams, but it can meet the requirement, since it detects faces and people (with the advanced preset).
Detecting whether someone is wearing a uniform is not supported in Video Indexer at the moment, but the ability to detect clothing color will come in the future.
By fragmenting the video, Azure Video Indexer can provide a near-live solution. That means a delay of a few minutes, so it depends on how time-sensitive your requirements are.
Regarding the uniform question, in a few months it will be possible to customize a model to identify specific uniforms. When the bounding boxes of the uniforms match the bounding boxes of detected people, you can determine whether a person is wearing a uniform.
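To make the near-live flow concrete, here is a minimal upload-and-poll sketch against the Video Indexer REST API. The endpoint shapes follow the public docs as I understand them; LOCATION, ACCOUNT_ID, and the access token are placeholders, so treat this as a sketch to verify against the current API reference rather than production code:

```typescript
// Upload a video by URL to Azure Video Indexer, then poll for insights.
const LOCATION = "trial";                       // placeholder region/account location
const ACCOUNT_ID = "<account-id>";              // placeholder
const ACCESS_TOKEN = "<account-access-token>";  // from the Get Account Access Token API

async function indexVideo(videoUrl: string): Promise<void> {
  // Upload/index the video by URL; the advanced preset enables people detection.
  const uploadRes = await fetch(
    `https://api.videoindexer.ai/${LOCATION}/Accounts/${ACCOUNT_ID}/Videos` +
      `?accessToken=${ACCESS_TOKEN}&name=cam-feed` +
      `&videoUrl=${encodeURIComponent(videoUrl)}&indexingPreset=Advanced`,
    { method: "POST" },
  );
  const { id: videoId } = await uploadRes.json();

  // Poll the index until processing finishes, then inspect the insights.
  for (;;) {
    const indexRes = await fetch(
      `https://api.videoindexer.ai/${LOCATION}/Accounts/${ACCOUNT_ID}` +
        `/Videos/${videoId}/Index?accessToken=${ACCESS_TOKEN}`,
    );
    const index = await indexRes.json();
    if (index.state === "Processed") {
      // The people/faces insights carry time ranges, which could drive
      // a "no human detected for N minutes" alert on your side.
      console.log(JSON.stringify(index.summarizedInsights, null, 2));
      break;
    }
    await new Promise((r) => setTimeout(r, 30_000)); // wait 30s between polls
  }
}
```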

Position Extraction in DocuSign

I've been reviewing the DocuSign documentation to see if this feature is available through the API. We currently work with one eSign vendor, OneSpan, which offers Position Extraction via PDF tags set up in the document (link below for reference). I'm curious whether the same functionality is available in DocuSign; I have been unable to find it in the documentation.
To give some background on the use case: our clients want to set up our documents with PDF tags and use those to create eSign transactions. The goal is for them to be vendor agnostic, since the eSign creation would rely on extracting the PDF tags rather than explicitly setting height/width and x/y coordinates.
OneSpan Link for Position Extraction:
https://community.onespan.com/documentation/onespan-sign/guides/feature-guides/developer/position-extraction
Edit: To clarify the process our clients are looking for, and to give some background: our clients have applications that call our gateway of APIs to create eSign transactions. Our APIs take in a generic eSign request, which we convert to the appropriate vendor's request structure before sending it out for creation. Certain clients use certain vendors, which is why we take in a generic request and convert it depending on which vendor the client is subscribed to.
Our clients are migrating away from an old legacy eSign vendor whose X/Y origin is at the bottom left and which renders the PDF differently during the signing ceremony. In migrating to a new vendor, our clients face heavy obstacles converting the X/Y coordinates and height/width so that signatures, fields, etc. appear correctly in the document with the new vendor.
We are trying to avoid this kind of problem if we ever switch eSign vendors again. One idea we're looking into is setting up PDF tags (some vendors use different terms, such as "text tags" or "anchor tags") in the document itself. Say we have a signature PDF tag named "signature1"; this is where the OneSpan Position Extraction I linked comes in. OneSpan can extract the positioning of that tag using the name set up in the PDF ("signature1" in this case) and use it to create the signature block for the eSigning ceremony.
DocuSign is another vendor we may integrate later this year, and we wanted to see if similar functionality is available. If so, it would save our clients the step of converting X/Y coordinates and height/width when switching to a different vendor.
Yes, you can do that.
There are two ways to do that, depending on the original PDF.
One approach is anchor tags: you look for certain words or strings (like "sign here") in the document and position the tabs relative to them. You can later make an API call to determine their X/Y values.
The other approach is PDF Form Field Transformation, where the PDF contains metadata that DocuSign can use to determine where to place tabs.
Again, using the same API calls, you can query the document for the resulting X/Y coordinates.
(I work for DocuSign)
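For illustration, here is a minimal sketch of an envelope definition using both approaches, based on the eSignature REST API's JSON shapes; the document content, recipient details, and endpoints shown in comments use placeholder values:

```typescript
// Sketch only: field names follow the DocuSign eSignature REST API.
const envelopeDefinition = {
  emailSubject: "Please sign",
  status: "sent",
  documents: [{
    documentBase64: "<base64-encoded PDF>", // placeholder
    name: "agreement.pdf",
    fileExtension: "pdf",
    documentId: "1",
    // Approach 2: PDF Form Field Transformation -- DocuSign converts the
    // PDF's own form fields (metadata) into tabs.
    transformPdfFields: "true",
  }],
  recipients: {
    signers: [{
      email: "signer@example.com",
      name: "A. Signer",
      recipientId: "1",
      tabs: {
        // Approach 1: anchor tagging -- place a SignHere tab wherever the
        // string "signature1" appears in the document text.
        signHereTabs: [{
          anchorString: "signature1",
          anchorUnits: "pixels",
          anchorXOffset: "0",
          anchorYOffset: "0",
        }],
      },
    }],
  },
};

// Create the envelope, then read back the resolved X/Y position of each tab:
//   POST /restapi/v2.1/accounts/{accountId}/envelopes
//   GET  /restapi/v2.1/accounts/{accountId}/envelopes/{envelopeId}
//            /recipients/{recipientId}/tabs
console.log(JSON.stringify(envelopeDefinition, null, 2));
```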
Re your comment about "vendor agnostic" eSignatures: that's a nice idea in theory, but it can lead to sub-standard solutions for the end customers. For example:
DocuSign offers built-in responsive signing, which greatly improves the usability of the signing ceremony on mobile and tablet devices. A "generic" eSignature integration that leaves out that capability forces a more difficult user experience on the signers. DocuSign has many other features like that.
Most ISVs have competitors, and if you're adding eSignatures to your application, your competitors probably are too. In these cases, we've seen ISVs tightly integrate their applications with multiple eSignature features. When a competing ISV later adds eSignatures, it must match or exceed the first ISV's integration; otherwise it is not competitive in the marketplace.
Bottom line: a lowest-common-denominator solution can end up as a non-competitive solution.

PageSpeed Insights: logic behind the number of distinct samples needed to show data for a URL

I'm reading the PageSpeed Insights documentation and wondering if anyone knows how Google determines what counts as a sufficient number of distinct samples, per this FAQ:
Why is the real-world Chrome User Experience Report speed data not available for a URL?
Chrome User Experience Report aggregates real-world speed data from opted-in users and requires that a URL must be public (crawlable and indexable) and have sufficient number of distinct samples that provide a representative, anonymized view of performance of the URL.
I'm building a report centered on Core Web Vitals data and am realizing that some URLs have few data points with CWV timings, so I'm curious exactly how Google handles these situations. I've been searching through docs and articles but haven't found a specific reference.
The exact threshold is kept secret, which is why you won't find it documented anywhere. However, as a site owner there are a few things you can do to work around a URL not having sufficient data:
Use the Core Web Vitals report in Search Console, which groups similar pages together, making them more likely to collectively exceed that threshold.
Look at origin-level aggregations in PSI or the CrUX API (see the sketch after this list). These include user experiences from all pages on the origin, so they're much less granular, but they give you a sense of typical experiences overall.
Instrument your site with your own first-party Core Web Vitals monitoring. web-vitals.js can be integrated with your existing analytics provider to track vitals on all of your pages. If you're integrating with Google Analytics, you can link your data with the Web Vitals Report to see how your site is doing.
Use your site with the Web Vitals extension enabled to see the Core Web Vitals data for your own experience. Your local experiences may not be representative of most users, but this can be a great tool for validating expectations vs reality.
Use lab data as a proxy. For example, lab data from Lighthouse in PSI can tell you how a mobile user on a slow connection might experience your page. This should really only be used as a last resort when no other field data is available.
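As an example of the origin-level option, here is a minimal sketch that queries the CrUX API. The endpoint and request shape follow the public CrUX API docs; the API key is a placeholder:

```typescript
// Query the CrUX API for origin-level Core Web Vitals data.
const CRUX_ENDPOINT =
  "https://chromeuxreport.googleapis.com/v1/records:queryRecord";

interface CruxMetric {
  percentiles: { p75: number | string }; // CLS p75 is returned as a string
}

async function queryOrigin(origin: string, apiKey: string): Promise<void> {
  const res = await fetch(`${CRUX_ENDPOINT}?key=${apiKey}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // "origin" aggregates across all pages; use "url" instead for a single
    // page, which may return 404 if the sample threshold isn't met.
    body: JSON.stringify({ origin, formFactor: "PHONE" }),
  });
  if (!res.ok) {
    // A 404 here is how the API reports "not enough distinct samples".
    throw new Error(`CrUX API error: ${res.status}`);
  }
  const data = await res.json();
  const metrics = data.record.metrics as Record<string, CruxMetric>;
  for (const [name, metric] of Object.entries(metrics)) {
    console.log(`${name}: p75 = ${metric.percentiles.p75}`);
  }
}

queryOrigin("https://example.com", "<your-api-key>").catch(console.error);
```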

Google Popular Times in Node.js

Google surfaces "Popular Times" data showing the specific times at which a particular business or other place is busy or popular. However, the available implementation comes in Python.
Does anyone know a way to use the google popular times api inside a Node.js project?
https://github.com/m-wrzr/populartimes
That link is for Python code. How can I include it in, or get this data from, a Node project?
You could try the foot-traffic API service BestTime.app, which also works with Node.js. Unfortunately it's paid, but they offer a free test account.
BestTime.app provides foot-traffic data similar to Google Popular Times and Foursquare data, but with more functionality. You can also analyze and filter the foot-traffic data of multiple places in an area: for example, you can show only bars that are busy on Friday evening, or only supermarkets that are quiet on Sunday morning.
You can view the data through their website tools (e.g., on a heatmap) or get the same data through their software API (see their Software API tutorial).
Integrating the API is especially useful if you want to, for example, build consumer-focused apps or websites that tell people which place to visit and at what time.
In the picture above, the BestTime.app Radar tool shows foot-traffic data for popular attractions in New York City. On the left, a foot-traffic forecast is shown for the whole day. The map is overlaid with a heatmap indicating the predicted foot-traffic intensity for the current hour at each place. Using the filters in the right panel, you can narrow your search to, for example, only the quiet NYC attractions on Friday afternoon.
Disclosure: I work for BestTime.app
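Alternatively, if you'd rather keep using the free populartimes Python library from the question, one common workaround (not part of the answer above, just a general pattern) is to shell out to a small Python script from Node. A minimal sketch, assuming Python 3 and populartimes are installed, and that get_id(api_key, place_id) is the entry point documented in the linked repo's README:

```typescript
// popular_times.ts -- call the Python populartimes library from Node.js.
import { execFile } from "node:child_process";

// Inline Python: fetch popular-times data and print it as JSON.
const PY_SNIPPET = `
import json, sys, populartimes
print(json.dumps(populartimes.get_id(sys.argv[1], sys.argv[2])))
`;

function getPopularTimes(apiKey: string, placeId: string): Promise<unknown> {
  return new Promise((resolve, reject) => {
    // With "python3 -c", extra arguments land in sys.argv starting at index 1.
    execFile("python3", ["-c", PY_SNIPPET, apiKey, placeId], (err, stdout) => {
      if (err) return reject(err);
      resolve(JSON.parse(stdout)); // popularity-by-hour data as JSON
    });
  });
}

// Placeholders: supply your own Google API key and place ID.
getPopularTimes("<google-api-key>", "<place-id>")
  .then((data) => console.log(data))
  .catch(console.error);
```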

Troubleshooting slow-loading documents from DocuSign

A customer representative suggested that I try posting these questions here.
We spent some time monitoring issues with DocuSign loading slowly. While it was not slow every time, when it was slow it seemed to hang at a particular point in the process.
Below is a screenshot of a trace we ran in the browser; note the element that took 52 seconds to load. When loading was slow, it seemed to hang on this particular element. Can you offer any reasons why it could sometimes take 52 seconds or more to load this part?
We also have some other questions:
There seems to be continuous font downloading (2-3 MB in size) throughout the process of loading the page, and it occurs every time. Why is this, and can it be avoided?
Why do we sometimes see Seattle as the connection site when most of the time it is Chicago?
We noticed that DocuSign asks for permission to know our location. Does this location factor into where the document is downloaded from? Is the location also used in embedded signing processes?
Thank you for your assistance.
Unfortunately, without a bit more detail I am not entirely sure I can tell you why the page was loading so slowly. Is it consistent? If so, is it always the same document (perhaps a template?) where you see the slowness?
As for your other three questions:
In my own test, decrypting the web traffic via Fiddler, I can see the fonts being rendered for each individual tag rather than once for the entire document. This is most likely because each tag has its own settable attributes (font included).
DocuSign's data centers are in Seattle, Chicago, and Dallas. DocuSign traffic can come from any of these three data centers, as the system exists synchronously in all three locations. More info can be found here.
DocuSign geo-location simply leverages the location capability of HTML5-enabled browsers; the signer's IP address is recorded either way. It has no impact on which data center the traffic comes from. It is also included in the embedded signing process. It can be disabled on a per-brand basis in the Signing Resource File by setting the node DocuSign_DisableLocationAwareness to true.
