I have a page that shows a merchant's calendar:
https://staggingv2.tappio.me/c/sami/Tappio-Meeting
Now I want to build functionality that allows a merchant to embed this page on their own website using a script. Calendly provides the same feature, something like the snippet below, and I want to implement it on my website as well.
I have very little knowledge about webpack config. I have gone through this, but there is not much guidance in that answer. I know I have to build this page to the public directory, but how will I achieve this for just a single page?
Can anyone provide me with a pathway to achieve this functionality?
<!-- Calendly inline widget begin -->
<div class="calendly-inline-widget" data-url="https://calendly.com/s-m-sami125" style="min-width:320px;height:630px;"></div>
<script type="text/javascript" src="https://assets.calendly.com/assets/external/widget.js" async></script>
<!-- Calendly inline widget end -->
This is the page that gets embedded in an external website just by the script above:
https://calendly.com/s-m-sami125/test?month=2022-12
You can see it in this sandbox:
https://codesandbox.io/s/eloquent-chaplygin-lxssxo?file=/index.html
I think this is not a webpack question. At first glance most developers would recommend an iframe. You can simply generate the calendar on your Next.js server without any menu or frame, then share the JS code with your merchants so they can embed it. If you choose this route you may face problems with design (the target page may have a different design than your page) and with Cross-Origin Resource Sharing (CORS). CORS can be solved if you have admin rights on your server (I assume you do), but if you allow access from every origin it can become a security problem.
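If you go the iframe route, the snippet you give your merchants can mirror Calendly's pattern: a placeholder div plus a small loader script hosted on your domain that swaps it for an iframe. A rough sketch of that loader, assuming a hypothetical widget.js file, class name and data-url attribute:

// widget.js - hosted on your domain and referenced by the merchant's <script> tag
(function () {
  // Find every placeholder div the merchant pasted into their page
  var containers = document.querySelectorAll(".tappio-inline-widget");

  Array.prototype.forEach.call(containers, function (container) {
    var url = container.getAttribute("data-url");
    if (!url || container.querySelector("iframe")) return; // no URL, or already initialised

    var iframe = document.createElement("iframe");
    iframe.src = url;
    iframe.style.width = "100%";
    iframe.style.height = "100%";
    iframe.style.border = "0";
    container.appendChild(iframe);
  });
})();

The merchant-facing snippet then looks just like the Calendly one above, with the data-url pointing at a page like https://staggingv2.tappio.me/c/sami/Tappio-Meeting and the script src pointing at your hosted widget.js.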
I think providing an API is a better solution. You can build an API in Next.js easily, and your merchants can display the data however they want, in any design. Calling an API is easy from any framework nowadays.
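As a rough sketch of the API option, assuming the Next.js pages router and a hypothetical route name and data source:

// pages/api/availability/[merchant].js - hypothetical API route

// Placeholder for your real database query
async function getAvailableSlots(merchant) {
  return [{ start: "2023-01-10T10:00:00Z", end: "2023-01-10T10:30:00Z" }];
}

export default async function handler(req, res) {
  const { merchant } = req.query;
  const slots = await getAvailableSlots(merchant);

  // Let merchant sites call this endpoint from the browser; tighten this in production
  res.setHeader("Access-Control-Allow-Origin", "*");
  res.status(200).json({ merchant, slots });
}

A merchant's page could then call GET /api/availability/<merchant> from its own code and render the slots in whatever markup fits its design.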
I'm building a Shopify embedded application with two main points of functionality:
Upon install, product meta fields are populated with some values
Upon page load, a custom script is injected into the product page through the Shopify ScriptTag API
The injected script displays an icon alongside the values from the product meta fields.
Currently, from the product page, the injected script has to request the meta fields from my local server, which then queries the client's Shopify store before sending back the response.
Is there a way to access the meta field values directly from the product page, without having to do the above?
Thanks in advance.
The injected script only fires at page load on the frontend, while accessing metafields is only possible via Liquid or the Shopify API. The data flow you have right now is the standard way of doing things in Shopify in such cases. However, whether for performance reasons or otherwise, if you still want to achieve this you can make use of Liquid.
This can be done in two ways:
Provide a Liquid code snippet
Use Shopify API to add Liquid code snippet on App installation
Liquid Code Snippet
Once a user installs your app, provide them with a Liquid code snippet to integrate into their theme. That Liquid code snippet should expose the metafields to a JavaScript variable that your injected script will read.
Shopify API to add Liquid code snippet
If you don't want users to integrate the Liquid code snippet manually, then on application install use the Theme Assets API to add your Liquid code snippet to the client's active theme, as sketched below. This will require additional app permissions from users on install. Also factor in the different themes that may be installed, and remember to remove the code snippet from the theme when the app is uninstalled.
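A rough sketch of that install-time step, assuming a Node 18+ backend (global fetch) with a valid Admin API access token; the API version, snippet name and metafield path are placeholders and should be checked against the current Shopify docs:

// Hypothetical install hook: push a Liquid snippet into the shop's published theme
async function installMetafieldSnippet(shop, accessToken) {
  const headers = {
    "X-Shopify-Access-Token": accessToken,
    "Content-Type": "application/json",
  };

  // 1. Find the active (published) theme
  const themesRes = await fetch(`https://${shop}/admin/api/2023-10/themes.json`, { headers });
  const { themes } = await themesRes.json();
  const activeTheme = themes.find((theme) => theme.role === "main");

  // 2. Create/overwrite a snippet asset that exposes the metafield to JavaScript
  await fetch(`https://${shop}/admin/api/2023-10/themes/${activeTheme.id}/assets.json`, {
    method: "PUT",
    headers,
    body: JSON.stringify({
      asset: {
        key: "snippets/my-app-metafields.liquid",
        value: "<script>window.customMetaField = {{ shop.metafields.namespace.fieldname | json }};</script>",
      },
    }),
  });
}

Note that the snippet still has to be referenced from the theme (for example {% render 'my-app-metafields' %} in theme.liquid), which is another change the install step would need to make and the uninstall step would need to revert.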
You haven't mentioned the resource on which you will be creating metafields, but a sample Liquid code snippet should look something like this:
<script>
  var customMetaField = {{ shop.metafields.namespace.fieldname | json }};
</script>
In your custom app script, read the variable customMetaField. This is just a rough idea; you will need to check that the metafield namespace and the metafields actually exist and then output the values accordingly.
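On the injected-script side, reading it could look something like this rough sketch (the logging is a placeholder for your own icon-rendering logic):

// Injected via the ScriptTag API: read the value exposed by the Liquid snippet
function showMetafieldIcon() {
  if (typeof window.customMetaField === "undefined") {
    // Snippet missing or metafield not set: fall back to fetching from your server as before
    return;
  }
  console.log("Metafield value:", window.customMetaField); // replace with your icon-rendering logic
}

if (document.readyState === "loading") {
  document.addEventListener("DOMContentLoaded", showMetafieldIcon);
} else {
  showMetafieldIcon();
}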
Shop Metafields
My issue: For my thesis I am creating an auction site. I have an admin panel in which I would like to have some configuration so that an admin can specify that, if there are 10 days left before the end of an auction, some components should be displayed differently, some should not be visible at all, etc. That's what I call dynamic presentation.
My question: Right now I am working on the architecture and wondering whether SSR can be helpful in any way. I am already aware that it can shorten the download time of some collections from my database, even by half, but I am wondering whether it can help with the dynamic presentation itself.
What I already know: I have read all about the advantages and disadvantages of SSR (universal rendering) in React. Now I am only wondering whether it can be helpful in any way with dynamic presentation, or whether it won't matter if I choose SSR or CSR.
Small side question: I don't have the whole architecture ready yet. What I know is that I would like to have a database, a separate app for the admin, a backend, and a frontend (either SSR or CSR). My first thought on how to manage this dynamic presentation was to store some rules in the database. The rules could then be configured in the admin app whenever an admin wants to change anything. The rules would be sent to the backend and evaluated together with some additional data from the frontend, and the backend would then send a flag to the frontend indicating which components to display, etc. In theory I could move that evaluation to e.g. a Node.js server if I went with SSR. What I'm wondering is: can you think of any better way to handle dynamic presentation? What I am most afraid of is numerous ifs in the frontend. I would like a more elegant solution, but I have no other idea so far. For some time I thought about a scoring system, but I believe it would be too complicated (instead of sending a flag, send a score and the frontend would display the correct things based on the score), and it also wouldn't solve the issue of ifs on the frontend.
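To make the flag idea concrete, here is a rough sketch of what I have in mind (all names are invented): the backend evaluates the admin's rules and returns flags, and the frontend maps each flag through a lookup table instead of branching everywhere:

// Hypothetical flags computed on the backend from the admin-defined rules
const presentation = {
  countdownBanner: "highlighted", // e.g. fewer than 10 days until the auction ends
  bidHistory: "hidden",
  buyNowButton: "visible",
};

// Frontend: a single lookup table instead of scattered if-statements
const flagToProps = {
  hidden: { show: false },
  visible: { show: true },
  highlighted: { show: true, emphasized: true },
};

function presentationProps(componentName) {
  return flagToProps[presentation[componentName] || "visible"];
}

console.log(presentationProps("countdownBanner")); // { show: true, emphasized: true }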
I am aware that on Stack Overflow questions that can be answered rather than discussed are preferred, but I am really stuck and would appreciate help.
Basically, SSR can make your page faster because your data does not have to be fetched with an API call after the React script has finished loading. The data is fetched from the database when the page is requested and passed to the component, which is rendered along with the script.
Another very basic advantage, and the reason everyone is going the SSR way, is SEO. You cannot achieve good SEO with React CSR, because Google's bot and other crawlers will try to crawl your page without rendering it. It is like doing "view source" on a page: with CSR the page has no content, only the empty initial React divs. You need SSR so that the data is present on the user's first request.
SSR delivers the data on the user's first request; from then on, until the next full reload, React Router fetches data from the API on the client.
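For example, with Next.js (just one way of doing SSR with React; the page and data call below are hypothetical), the HTML for the first request already contains the data:

// pages/auction/[id].js - hypothetical Next.js page; data is fetched on the server per request

// Placeholder for your real database call
async function fetchAuctionFromDb(id) {
  return { id, title: "Sample auction", endsAt: "2023-05-01T12:00:00Z" };
}

export async function getServerSideProps({ params }) {
  const auction = await fetchAuctionFromDb(params.id);
  return { props: { auction } };
}

export default function AuctionPage({ auction }) {
  // The markup sent to the browser (and to crawlers) already contains this content
  return <h1>{auction.title}</h1>;
}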
Let me know if that helps you.
PS: also a helpful link https://medium.com/walmartlabs/the-benefits-of-server-side-rendering-over-client-side-rendering-5d07ff2cefe8
I am currently beginning to look at creating Chrome Apps and have followed a few of the basic tutorials now. I am happy with the basics so far except for one thing.
All the sample code and tutorials only seem to have one html file in the package, but what if I want to take a web app I have that uses more than one HTML page and turn it into a Chrome App?
How do I get the Chrome App to change from index.html to another HTML page when I want to show some other HTML? I have tried using a standard HTML anchor tag with the target set to _blank or _self, but it only opens the URL in a browser rather than changing the page in my application.
I am not from a web development background, so am I missing something basic to do this?
The simplest version of what Vincent Scheib said:
index.html
...
<div id="screen1" style="display:block">
...
</div>
<div id="screen2" style="display:none">
...
</div>
main.js
...
// A navigational event happens:
document.getElementById("screen1").style.display = "none";
document.getElementById("screen2").style.display = "block";
...
Packaged apps intentionally do not support navigation. Apps are not in a browser; there is no concept of forward, back, or reload. Applications that do require the concept of navigation should use a user interface framework that supports that functionality, e.g. by manipulating the DOM, using CSS, or using iframes to animate and control the visibility of components of your app.
How do I check whether a certain page is being accessed by a crawler or by a script that fires continuous requests?
I need to make sure that the site is only being accessed from a web browser.
Thanks.
This question is a great place to start:
Detecting 'stealth' web-crawlers
Original post:
It would take a bit of work to engineer a solution.
I can think of three things to look for right off the bat:
One, the user agent. If the spider is Google or Bing or anything else legitimate, it will identify itself.
Two, if the spider is malicious, it will most likely emulate the headers of a normal browser. Fingerprint it: if it claims to be IE, use JavaScript to check for an ActiveX object.
Three, take note of what it accesses and how regularly. If the content takes the average human X seconds to view, you can use that as a starting point when determining whether it is humanly possible to consume the data that fast. This is tricky; you will most likely have to rely on cookies, since an IP can be shared by multiple users.
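A rough sketch of points one and three, assuming a Node/Express backend (the bot list, time window and threshold are made up for illustration):

// Hypothetical Express middleware combining a user-agent check with a simple rate check
const express = require("express");
const app = express();

const KNOWN_BOTS = /googlebot|bingbot|slurp|duckduckbot|crawler|spider/i;
const hits = new Map(); // ip -> timestamps of recent requests

app.use((req, res, next) => {
  // One: well-behaved spiders identify themselves in the user agent
  const userAgent = req.get("User-Agent") || "";
  if (KNOWN_BOTS.test(userAgent)) {
    return res.status(403).send("Crawlers are not allowed on this page");
  }

  // Three: flag IPs that request faster than a human could plausibly read
  const now = Date.now();
  const recent = (hits.get(req.ip) || []).filter((t) => now - t < 10000);
  recent.push(now);
  hits.set(req.ip, recent);
  if (recent.length > 20) { // more than 20 requests in 10 seconds from one IP
    return res.status(429).send("Too many requests");
  }

  next();
});

app.get("/", (req, res) => res.send("Hello"));
app.listen(3000);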
You can use a robots.txt file to block access for crawlers, or you can use JavaScript to detect the browser agent and switch based on that. If I understood correctly, the first option is more appropriate, so:
User-agent: *
Disallow: /
Save that as robots.txt at the site root, and no compliant automated system should check your site.
I had a similar issue in my web application: I created some bulky data in the database for each user who browsed the site, and crawlers were causing loads of useless data to be created. However, I didn't want to deny access to crawlers, because I wanted my site indexed and found; I just wanted to avoid creating useless data and to reduce the time taken to crawl.
I solved the problem in the following ways:
First, I used the HttpBrowserCapabilities.Crawler property from the .NET Framework (available since 2.0), which indicates whether the browser is a search engine web crawler. You can access it from anywhere in the code:
ASP.NET C# code behind:
bool isCrawler = HttpContext.Current.Request.Browser.Crawler;
ASP.NET HTML:
Is crawler? = <%=HttpContext.Current.Request.Browser.Crawler %>
ASP.NET Javascript:
<script type="text/javascript">
var isCrawler = <%=HttpContext.Current.Request.Browser.Crawler.ToString().ToLower() %>
</script>
The problem with this approach is that it is not 100% reliable against unidentified or masked crawlers, but maybe it is useful in your case.
After that, I had to find a way to distinguish between automated robots (crawlers, screen scrapers, etc.) and humans, and I realised that the solution required some kind of interactivity, such as clicking a button. Some crawlers do process JavaScript, and they would quite obviously trigger the onclick event of a button element, but not of a non-interactive element such as a div. The following is the HTML/JavaScript code I used in my web application www.so-much-to-do.com to implement this feature:
<div class="all rndCorner"
  style="cursor:pointer;border-width:3px;border-style:groove;text-align:center;font-size:medium;font-weight:bold"
  onclick="$TodoApp.$AddSampleTree()">
  Please click here to create your own set of sample tasks to do
</div>
This approach has been working impeccably until now, although crawlers could be changed to be even more clever, maybe after reading this article :D
I have what I thought was a simple(ish) problem. I'm writing a SCORM package for an external learning resource. It's basically an iframe in an HTML page that clients install in their LMS (Learning Management System).
The external resource needs to be able to tell the LMS that the user has completed the content. Because the LMS and the resource are on different domains, there's obviously a JS security wall stopping me from communicating directly. So when the user reaches the end of the content, the external resource sets its URL to have an anchor, so the URL goes from http://url to http://url#complete.
Now I'm trying to get the location from the iframe and I'm failing miserably. I've tried iframe.location and iframe.window.location (.window is undefined too). I can't seem to get a handle on the right thing.
iframe.src shows me the original source URL, but it doesn't change when the iframe updates to the #complete version.
Any tips? Any alternatives? Although I control both pages, unless there's a JavaScript method for cross-domain communication, I can't set the HTTP header to allow it, because I don't control the LMS server - it just serves my static page.
Edit: As an alternative, I'm considering storing the completed event in the session (a cookie would work too, I guess) on the resource's end and then making another page that outputs it as a JSONP statement. I think it would be quite easy to do, but it's a lot more fuss for something that should be simple. I literally need to flip one switch in the LMS code from the external site.
Use easyXDM; it should make this fairly easy.
Using it you can do cross-domain RPC with no server-side interaction. The readme on GitHub is pretty good.
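If you would rather not pull in a library: since you control both pages, you can also do this directly with the standard window.postMessage API, which is the mechanism libraries like easyXDM build on. A minimal sketch, with placeholder origins:

// In the external resource (the page inside the iframe), when the user completes the content:
window.parent.postMessage("complete", "https://lms.example.com"); // target origin of the LMS page

// In the LMS-hosted wrapper page that contains the iframe:
window.addEventListener("message", function (event) {
  // Only trust messages coming from your resource's origin
  if (event.origin !== "https://resource.example.com") return;
  if (event.data === "complete") {
    // Flip the completion switch in the LMS here (SCORM API call)
  }
});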