I've been reviewing the DocuSign documentation to see whether this feature is available through the API. We currently work with one eSign vendor, OneSpan, which offers Position Extraction via PDF tags set up in the document (link below for reference). I'm curious whether the same functionality is available in DocuSign, as I have been unable to find it in the documentation.
To give some background on the use case: our clients want to set up our documents with PDF tags and use those for creating eSign transactions. The goal is to be vendor agnostic, since the eSign creation would rely on extracting the PDF tags rather than explicitly setting height/width and x/y coordinates.
OneSpan Link for Position Extraction:
https://community.onespan.com/documentation/onespan-sign/guides/feature-guides/developer/position-extraction
Edit: Just to clarify the process our clients are looking for, and some background as well. Our clients have applications that call our gateway of APIs for creating eSign transactions. Our APIs take in a generic eSign request, which we then convert to the appropriate vendor's eSign request structure before sending it out for creation. Certain clients use certain vendors, which is why we take in a generic request and convert it depending on which vendor that client is subscribed to.
Our clients are migrating away from an old legacy eSign vendor whose X/Y origin begins at the bottom left and which also renders the PDF differently during the signing ceremony. When trying to migrate to a new vendor, our clients are facing pretty heavy obstacles in converting the X/Y coordinates and height/width so that signatures, fields, etc. appear correctly in the document with the new eSign vendor.
We were trying to think of a way to avoid this kind of problem in the future if we were ever to switch eSign vendors again. One of the ideas we're looking into is setting up PDF tags (some vendors use different terms like "text tags", "anchor tags", etc.) in the document itself. Say we have a signature PDF tag named "signature1"; this is where the OneSpan Position Extraction I linked comes in. They offer the ability to extract the positioning of that signature PDF tag using the name set up in the PDF ("signature1" in this case) and use that to create the signature block for the eSigning ceremony.
DocuSign is another potential vendor we may integrate later this year, and we wanted to see if similar functionality is available. If so, this would spare our clients the step of converting X/Y coordinates and height/width when switching to a different vendor.
Yes, you can do that.
There are two ways to do that, depending on the original PDF.
One approach is anchor tags. You look for certain words or digits (like "sign here") in the document and position the tabs relative to that text; you can later make an API call to determine their resulting X/Y values.
The other approach is PDF Form Field Transformation, where the PDF has metadata that DocuSign can use to determine where to place tabs.
Again, using the same API calls, you can query the document for the resulting X/Y coordinates.
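To make that concrete, here is a rough sketch of both approaches using the eSignature REST API (v2.1) from TypeScript. The access token, account ID, signer details, and anchor name ("signature1", borrowed from the question) are placeholders, and error handling is omitted:

```typescript
// Sketch only: ACCESS_TOKEN, ACCOUNT_ID, and the signer are placeholders.
const BASE = "https://demo.docusign.net/restapi/v2.1/accounts/ACCOUNT_ID";
const headers = {
  Authorization: "Bearer ACCESS_TOKEN",
  "Content-Type": "application/json",
};

async function createAndInspectEnvelope(pdfBase64: string): Promise<void> {
  const envelope = {
    emailSubject: "Please sign",
    status: "sent",
    documents: [{
      documentId: "1",
      name: "contract.pdf",
      documentBase64: pdfBase64,
      // Approach 2: PDF Form Field Transformation - DocuSign converts the
      // PDF's own form fields into tabs.
      transformPdfFields: "true",
    }],
    recipients: {
      signers: [{
        recipientId: "1",
        email: "signer@example.com",
        name: "Jane Signer",
        tabs: {
          // Approach 1: position a tab by anchor text found in the document.
          signHereTabs: [{
            anchorString: "signature1",
            anchorUnits: "pixels",
            anchorXOffset: "0",
            anchorYOffset: "0",
          }],
        },
      }],
    },
  };

  const createRes = await fetch(`${BASE}/envelopes`, {
    method: "POST",
    headers,
    body: JSON.stringify(envelope),
  });
  const { envelopeId } = await createRes.json();

  // Query the placed tabs to read back the computed X/Y coordinates.
  const tabsRes = await fetch(
    `${BASE}/envelopes/${envelopeId}/recipients/1/tabs`,
    { headers },
  );
  const tabs = await tabsRes.json();
  for (const t of tabs.signHereTabs ?? []) {
    console.log(`page ${t.pageNumber}: x=${t.xPosition}, y=${t.yPosition}`);
  }
}
```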
(I work for DocuSign)
Re your comment about "vendor agnostic" esignatures. That's a nice idea in theory but can lead to sub-standard solutions for the end customers. For example:
DocuSign offers built-in Responsive signing, which greatly improves the usability of the signing ceremony on mobile and tablet devices. A "generic" eSignature integration that leaves out that capability forces a more difficult user experience on the signers. DocuSign has many other features like that.
Most ISVs have competitors, and if you're adding eSignatures to your application then probably your competitors are too. In these cases, we've seen ISVs tightly integrate their application with multiple eSignature features. Later, when their ISV competitors add eSignatures too, the second ISV must either match or exceed the first ISV's eSignature integration. Otherwise, the second ISV is not competitive in the marketplace.
Bottom line is that a lowest common denominator solution can end up as a non-competitive solution.
As a newbie to Liferay I am investigating whether I used the right approach to populate data forms.
When populating a form with a number of text fields from an external REST service, I have already implemented the following approaches:
Creating a web content page and filling it with Ajax. Works, but using jQuery is not my preferred (modern) approach for accessing REST services.
Creating a web page using an Angular/React portlet, etc. This also worked for me. As I understand it, I have to create a small Angular (module) project per page.
Creating a Form page that retrieves the fields via a data provider. This also worked for me, but it was presentation only.
"Service builders" are used in the examples for working with databases. I did not use this method yet.
Below are a few questions, but there is only one theme: when using Liferay, what are the best techniques to populate data forms?
Is there a better/easier approach in Liferay to fill the form?
When using option 3, will the data provider be called multiple times, once for each field?
What if I would like to post the populated form data to the REST service?
Which approach is best to fill a table with rows of data?
Can/should "service builders" be used for interacting with REST services as well?
Is there a way to interact with basic components and Angular? I could not find anything on the internet yet.
I believe that your question is still out of scope for Stack Overflow, as it asks for "the best" way; to that, the answer is a firm "it depends".
You're listing a couple of options yourself. What you choose will in the end depend on
the technology you know
the technology you feel comfortable maintaining long term
the business requirements - e.g. how flexible do you need to be? Can you go low-code, or do you need the full flexibility to develop an actual app?
how frontend-heavy are your requirements
how independent of Liferay do you want them to be?
That being said: All of your options are good choices in various situations that you need to solve.
My general recommendation is to think maintenance, and optimize for the future maintenance of your solution, rather than the initial implementation time.
But, unfortunately, no firm single answer.
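For what it's worth, the fetch-based variant of your first option (no jQuery) is only a few lines. A minimal sketch, where the REST endpoint, payload shape, and input IDs are hypothetical placeholders:

```typescript
// Minimal sketch of "call a REST service, fill the form" without jQuery.
// The endpoint and field IDs below are made up for illustration.
interface Customer {
  name: string;
  email: string;
}

async function populateForm(): Promise<void> {
  const res = await fetch("/o/my-rest-service/customers/42", {
    headers: { Accept: "application/json" },
  });
  if (!res.ok) throw new Error(`REST call failed: ${res.status}`);
  const customer: Customer = await res.json();

  // Fill the text fields by their input IDs.
  (document.getElementById("name") as HTMLInputElement).value = customer.name;
  (document.getElementById("email") as HTMLInputElement).value = customer.email;
}

// Posting the populated form back (your question 3) is the same call in reverse.
async function submitForm(data: Customer): Promise<void> {
  await fetch("/o/my-rest-service/customers/42", {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(data),
  });
}

populateForm();
```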
[React/NodeJS] I'm having a huge struggle finding the appropriate documentation on this topic. I am looking to implement PayPal on my website, with the constraint that the customer is charged after our offline service is completed (costs are variable, but there is something of a solution in mind: if I can 'authorize' an amount equal to our maximum cost, we will be a-okay). My initial research indicated that PayPal Orders fulfill this desire, at least to an effective extent, i.e. the order is placed and funds are not put on hold until we authorize the charge, ideally after the offline service is completed (source: https://developer.paypal.com/docs/integration/direct/payments/orders/#order-response). Upon further inspection, I discovered that the integration path using PayPal Smart Buttons is heavily advocated in the implementation docs and appears to be compatible with the Orders API (source: https://developer.paypal.com/docs/checkout/).
I began implementing, following the Smart Buttons integration linked just prior, and followed the instruction to use server-side API calls to process the payment (sources: https://developer.paypal.com/docs/checkout/integrate/ and https://developer.paypal.com/docs/checkout/reference/server-integration/set-up-transaction/). Continuing forward, I pursued order creation explicitly through the Orders API (mistake, perhaps?) and used the docs for the v2 Orders API along with the docs for the Node.js SDK package referenced in the PayPal docs (@paypal/checkout-server-sdk, using the GitHub docs). I set up the integration; however, the sandbox accounts showed that charges were being placed, which contradicted my goal of not placing holds until an authorization is completed. The status returned on the backend stayed at "created", so I was initially optimistic, but the charge placement was unfortunate.
I am struggling to find the next step. Since the checkout-server-sdk utilizes both payments/v2 and orders/v2 (source: https://www.npmjs.com/package/@paypal/checkout-server-sdk), I am led to believe I can use those API endpoints as well, but I can't find explicit functions in the checkout-server-sdk that call the Payments API, which I believe I need in order to change the order intent (source: https://developer.paypal.com/docs/integration/direct/payments/orders/). I also noticed that those docs (linked as active directly from https://developer.paypal.com/docs/, I believe) post to payments/v1, which has been deprecated. Long story short, I am now lost and would thoroughly appreciate some guidance on where I went astray, which docs to refer to, whether this implementation is still supported, and potentially what the next step is. If I used incorrect verbiage or made some noticeable jump in logic to my detriment, I would love to know, as I am fairly new to developer work as a whole. Thank you in advance!
Your use case of not placing a temporary hold up front requires intent:order, and only the v1/orders API has this available. The v2/* APIs do not.
An intent:authorize hold typically clears from a card after about 3 days (even though it remains capturable until day 29), so I would recommend using the v2 APIs if that's workable. But if it's important not to place that temporary hold, then v1/orders can be used. The API is not going to disappear; people are using it. Even older Classic APIs with similar PAYMENTACTION=ORDER functionality are still in heavy use, after all.
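If the v2 route is workable for you, the AUTHORIZE flow with @paypal/checkout-server-sdk might look like the sketch below. This is an illustration, not a drop-in integration: the credentials and amounts are placeholders, and error handling is omitted.

```typescript
// Sketch of the v2 intent:AUTHORIZE flow with @paypal/checkout-server-sdk.
import paypal from "@paypal/checkout-server-sdk";

const client = new paypal.core.PayPalHttpClient(
  new paypal.core.SandboxEnvironment("CLIENT_ID", "CLIENT_SECRET"),
);

// 1. Create the order with intent AUTHORIZE (your create-order server route).
async function createOrder(): Promise<string> {
  const request = new paypal.orders.OrdersCreateRequest();
  request.requestBody({
    intent: "AUTHORIZE",
    purchase_units: [{ amount: { currency_code: "USD", value: "100.00" } }],
  });
  const response = await client.execute(request);
  return response.result.id; // hand this order ID to the Smart Buttons client side
}

// 2. After buyer approval, authorize; this is when the hold is placed.
async function authorizeOrder(orderId: string): Promise<string> {
  const request = new paypal.orders.OrdersAuthorizeRequest(orderId);
  request.requestBody({});
  const response = await client.execute(request);
  return response.result.purchase_units[0].payments.authorizations[0].id;
}

// 3. When the offline service is done, capture up to the authorized amount.
async function captureAuthorization(authorizationId: string): Promise<void> {
  const request = new paypal.payments.AuthorizationsCaptureRequest(authorizationId);
  request.requestBody({
    amount: { currency_code: "USD", value: "87.50" }, // the actual final cost
  });
  await client.execute(request);
}
```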
I am still a relative newcomer to ExpressionEngine as a developer and a user. I am faced with the problem that a lot of my knowledge is being passed to me by users who have found ways to accomplish tasks traditionally undertaken by developers (e.g. product libraries) by using the channels system.
What I wondered was what people's views are on when it is best to advise a client to use this and when not to.
Let me use an example: a client wants a system with venues where events can take place. The previous developer chose to use the membership system for the venues and the channels system for the events, writing some custom code to knit the two together, specifically because there are not enough hooks to accomplish some background automated tasks, like looking up the long/lat of a venue's address when it is created or updated.
I am largely picking up after someone else's work, but it's not their fault; it was the information they were given, as they were also new to the system.
In any other project this would be a master-detail setup: events belong to venues. I'd probably write two custom tables, build editors in the admin area via modules, and then use regular custom code in the pages to display and act upon the info; this way, I could control what happens when a user hits submit.
However, the instigating party is a veteran ExpressionEngine user who instructed the previous developer along the lines of "oh, just put it all in the channels, and then there's this tag and that tag and so on".
So, am I missing the point, or am I right that this does not fit the channels system? And when should you use channels, and when not?
Thanks friends.
This question is very hypothetical, and every developer will give you a different answer, as it all depends on the requirements and how that EE developer rolls.
Fundamentally, ExpressionEngine allows you to approach builds in many ways; none are right or wrong, albeit some are easier, some harder, and others just plain daft.
Basically, channels are groups of data "entries", and these can be anything. Using your example, venues could be one channel with fields created relevant to the subject (e.g. location, size, price, etc.), and events another channel with different fields (e.g. date, type, location).
Almost anything can be slotted into a channel, but member details are best held within the native member functionality (although there is a commercial add-on that holds member data in a channel).
You reference the previous developer's approach - this could be because they used a third-party add-on that required the data to be held separately from channels, because of a lack of understanding of the best approach, or just because the developer decided to do it that way! I suspect the last developer then associated a member (venue) with an entry (event) to link the event to the venue. Basic EE functionality allows for related entries, which lets you associate one entry with another (e.g. Event -> Venue), or you can use the excellent Playa add-on, so the previous approach is really not necessary.
Personally, I would always store the data in channels, and people/members in the native membership functionality (e.g. admins, visitors to the site, customers, etc.). I'd only build an add-on (utilising its own tables/data) to store additional information if it was way outside what EE could store.
To answer your practical question (honestly, it's stretching the scope of what Stack Overflow questions are supposed to be): you should use a channel for Venues and a channel for Events, and the Venue field in the Event entry should be a "Related Entries" fieldtype linked to the Venues channel. That's the "standard" EE way, and the closest to a traditional database schema.
I am a student developer with Oregon State University's Business Solutions Group, and I am currently working on a Salesforce integration project for one of the University's colleges. As you can imagine, the data we are working with comes from several different places and in a variety of formats. I was wondering if anyone with more experience in setting up Salesforce object schemas could talk about the pros and cons of relational-database-style normalization in Salesforce. What do we gain by not normalizing and instead using Record Types to categorize data (for example, a Person-Account that encompasses Students and Faculty and uses Salesforce Record Types to differentiate between the two)? What do we lose?
This message was inspired by this webpage:
Salesforce Guru: Record Types
Notice that the first thing it advises is not to normalize (overmuch), because doing so prevents us from taking advantage of some built-in Salesforce functionality. Overall, the page seemed helpful, albeit incomplete.
The answer to this question seems critical to the success of our project and will help us to decide how to reorganize the data we are initially migrating to Salesforce and ultimately build our Salesforce object schema, so any thoughts, additional resources or advice are very much appreciated. Thanks!
The inspirational web page is correct. With "standard objects" like Account, Contact, Case, Lead, etc., and even with custom objects, the system works best when you use fewer tables (objects) and segregate the data based on some value (such as record type).
By using record types you leverage the point-and-click UI. For example, the Account object has a default page layout, but each record type can have its own unique page layout. Furthermore, the security model uses record types to limit or grant access as appropriate to different user profiles.
As the author says, SOQL is NOT SQL.
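To make the record-type point concrete, here is a rough sketch of querying one object that is segregated by record type, using the jsforce library from Node.js. The login details and the record type name ("Student", matching your example) are placeholders, and Contact stands in for whichever object you end up using:

```typescript
// Sketch: one object, segregated by record type, queried with jsforce.
import jsforce from "jsforce";

async function listStudents(): Promise<void> {
  const conn = new jsforce.Connection({ loginUrl: "https://login.salesforce.com" });
  await conn.login("USERNAME", "PASSWORD_PLUS_SECURITY_TOKEN");

  // SOQL is not SQL: there are no joins; relationships are traversed with
  // dot notation, so the record type filter is a relationship path, not a JOIN.
  const result = await conn.query(
    "SELECT Id, Name, RecordType.DeveloperName " +
    "FROM Contact " +
    "WHERE RecordType.DeveloperName = 'Student'",
  );
  for (const record of result.records) {
    console.log((record as { Name?: string }).Name);
  }
}

listStudents();
```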
Reading through the Flickr API documentation, I keep seeing that I require an API key to use their REST protocols. I am only building a photo viewer, gathering information available from Flickr's public photo feed (for instance, I am not planning on writing an upload script, where an API key would be required). Is there any added functionality I can get from an API key?
Update: I answered the question below.
To use the Flickr API you need to have an application key. We use this to track API usage.
Currently, commercial use of the API is allowed only with prior permission. Requests for API keys intended for commercial use are reviewed by staff. If your project is personal, artistic, free or otherwise non-commercial please don't request a commercial key. If your project is commercial, please provide sufficient detail to help us decide. Thanks!
http://www.flickr.com/services/api/misc.api_keys.html
We set up an account and got an API key. The answer to the question is: yes, there is advanced functionality with an API key when creating something like a simple photo viewer. The flickr.photos.search method has many more features for receiving an RSS feed of images than the public photo feed, such as only retrieving new photos since the last feed request (via the min_upload_date argument) or searching for "safe photos" only.
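For anyone finding this later, here is a minimal sketch of such a keyed flickr.photos.search call; the API key and user ID are placeholders:

```typescript
// Sketch of flickr.photos.search, which the public photo feed can't match.
// API_KEY and the user ID are placeholders; the parameter names come from
// the Flickr API docs.
async function searchNewSafePhotos(sinceUnixTs: number): Promise<void> {
  const params = new URLSearchParams({
    method: "flickr.photos.search",
    api_key: "API_KEY",
    user_id: "12345678@N00",              // whose photos to search
    min_upload_date: String(sinceUnixTs), // only photos since the last request
    safe_search: "1",                     // 1 = safe photos only
    format: "json",
    nojsoncallback: "1",
  });
  const res = await fetch(`https://api.flickr.com/services/rest/?${params}`);
  const data = await res.json();
  for (const photo of data.photos.photo) {
    console.log(photo.title);
  }
}
```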
If you have a key, they can monitor your usage and make sure that everything is copacetic: you are below the request limit, etc. They can separate their stats for regular vs. API usage. If they are having response-time issues, they can make responses a bit slower for API users in order to keep the main website responding quickly, etc.
Those are the benefits to them.
The benefits to you? If you just write a scraper and it does something they don't like, such as hitting them too often, they'll block you unceremoniously for breaking their ToS.
If you only want to hit the thing a couple of times, you can get away without the Key. If you are writing a service that will hit their feed thousands of times, you want to give them the courtesy of following their rules.
Plus like Dave Webb said, the API is nicer. But that's in the eye of the beholder.
The Flickr API is very nice and easy to use and will be much easier than scraping the feed yourself.
Getting a key takes about two minutes: you fill in a form on the website and they then email the key to you.
Well, they say you need a key, so you need a key :-) Exposing an API means you can pull data off the site far more easily, so it is understandable that they want this under control. It is pretty much the same as with other API-enabled sites.