Browser fingerprinting to identify and ban visitors

Apologies in advance if this question has already been asked or is posted in the wrong section; please help me solve this problem, or move my topic to the appropriate section. I run an Arabic chat site built on a chat script.
While searching for the most effective ways to ban annoying visitors from the chat, I found a file called fingerprint2, which creates a distinctive fingerprint of the browser. Link to the Fingerprint file:
https://cdnjs.cloudflare.com/ajax/li...rprint2.min.js
The idea behind the file:
The basic idea of the file is to create a distinctive fingerprint of the browser, so that members can be distinguished on the site even if a member changes his name and his IP. The file can also fetch a lot of information through the member's browser, such as the browser version, country, city, and the internet company used by the member. The only problem we have now is how to use the file to obtain a member's browser fingerprint and fetch the basic data from the browser, such as the country, city, and internet company, so we can store this data in the database and use it to protect the site and the chat from spam and annoying members.
Thank you for your time.
Site Link:
https://www.3a-chat.com/chat

Unfortunately the library URL in your question does not work, but I would recommend using this existing solution and extending it a bit. For example, you may add these extra components:

- pixel ratio: window.devicePixelRatio || 1
- languages: (navigator.languages || []).join(',')
- math precision: ${((Math.exp(10) + 1 / Math.exp(10)) / 2)}${Math.tan(-1e300)}
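
A minimal sketch of how those extra signals could be wired into fingerprintjs2 and hashed into a single visitor ID, assuming fingerprint2.min.js is already loaded on the page (extraComponents and x64hash128 are part of the fingerprintjs2 API; the component key names are my own):

    // Extend fingerprintjs2 with the extra signals listed above.
    var options = {
      extraComponents: [
        { key: 'pixelRatio', getData: function (done) {
            done(window.devicePixelRatio || 1);
        } },
        { key: 'languages', getData: function (done) {
            done((navigator.languages || []).join(','));
        } },
        { key: 'mathPrecision', getData: function (done) {
            // Same value as the template-literal expression above.
            done('' + ((Math.exp(10) + 1 / Math.exp(10)) / 2) + Math.tan(-1e300));
        } }
      ]
    };

    Fingerprint2.get(options, function (components) {
      // Concatenate every component value and hash with the bundled murmur hash.
      var values = components.map(function (c) { return c.value; });
      var visitorId = Fingerprint2.x64hash128(values.join(''), 31);
      // Send visitorId to the server to store alongside the member record.
      console.log('visitor fingerprint:', visitorId);
    });

Note that country, city and internet provider cannot be read from the browser itself; they are usually derived server-side from the visitor's IP address with a geolocation database.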

Related

Google Play: transferring data via link to app

I am currently on vacation in Austria, and decided to buy some tickets for the zoo in Innsbruck. After typing in my data... a link to their app in the Google Play store. And after downloading it, Google Play didn't show the usual Open button to open the app; rather, it showed a button saying "continue", and after clicking it, the app opened with my data already in place.
So here is my question: how might they have done it, and how can I implement it in my own app / website?
Thanks a lot in advance; it would be nice if there were an easy way of doing this.
EDIT: I don't know; maybe they shared the session token somehow, or used the IP of my mobile?
You can achieve this using Firebase Dynamic Links.
This is done by adding an intent filter for deep links.
As with plain deep links, you must add a new intent filter to the activity that handles deep links for your app. The intent filter should catch deep links of your domain, since the Dynamic Link will redirect to your domain if your app is installed. This is required for your app to receive the Dynamic Link data after it is installed/updated from the Play Store and the user taps the Continue button. In AndroidManifest.xml:
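A typical filter, modeled on the Firebase Dynamic Links documentation (example.com is a placeholder for the domain your Dynamic Links redirect to):

    <intent-filter>
        <action android:name="android.intent.action.VIEW"/>
        <category android:name="android.intent.category.DEFAULT"/>
        <category android:name="android.intent.category.BROWSABLE"/>
        <!-- Replace example.com with your own Dynamic Links domain -->
        <data android:host="example.com" android:scheme="https"/>
    </intent-filter>

With this in place, the launched activity can read the deep link (e.g. via FirebaseDynamicLinks.getInstance().getDynamicLink(getIntent())) and pre-fill the user's data.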

How to change Share text on Invite Family and Friends

I am looking to re-brand the Mesibo app https://github.com/mesibo/messenger-app-android/
The Mesibo docs say we can change everything. However, I am unable to find a way to change the small things where it says "Mesibo".
For example, the Invite Family and Friends screen shows the Mesibo name and links to download the Mesibo app, which of course we need to change to ours.
How can we do that?
The project compiles and runs fine.
I guess you have already found this by yourself, but if not: this information (and more) comes from the backend API server code. Check out the API server code at https://github.com/mesibo/messenger-app-backend/blob/master/api_functions.php and change the response text, or you can override the Mesibo server's answer in your application. For example:
$invite['text'] = 'Hey, I use Mesibo for free messaging, voice and video calls. Download it from https://m.mesibo.com';
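
So rebranding can be as simple as editing that line; for example (the app name and download URL below are placeholders):

    // api_functions.php: replace the Mesibo branding with your own
    $invite['text'] = 'Hey, I use MyChatApp for free messaging, voice and video calls. Download it from https://example.com/app';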

Share via FBSDKShareDialog ignores applink defined on target page

This has been driving me nuts all day:
I have an iOS app with a custom URL scheme defined, and am trying to share a link on FB which points to a page that has this scheme in its applink meta tags, so that tapping it should fire up my app.
Every little piece of it is working just fine. I can enter my URL scheme in safari on the phone and the browser launches my app. I have tested my webpage with the FB debug tool and there are no errors or warnings - it correctly identifies all the meta tags.
If I share the link using FB on the phone or on my laptop, all works fine.
HOWEVER, if I share the exact same link using FBSDKShareDialog, it does not work. It just opens the web page with the meta tags as if it was any regular web page.
Does anyone have any idea why these two ways of sharing behave differently? They look exactly the same otherwise.
If anyone else runs into this problem, here's the reply from FB:
When you share with mode automatic, the app does a fast app switch over to the FB app to show the native share dialog. The post is cached locally on the device, and it does not know about App Links (since only the Facebook server side knows about them). When the user opens the Facebook app, the user sees their cached story (with no App Links behavior). This doesn't manifest with the Web mode, since the Facebook app needs to pull from the server to get the post, in which case it has all the App Links info.
This is unlikely something that we'll fix. However, after a while, the cache will expire, and the Facebook app will re-pull the posts from the servers, in which case the App Links data will be available.
In order to test this, you can share the post on one device, and then try clicking on the post from another device. The App Links should work at that point.
Which is kind of a lame response, IMO: they parse the target page to build the preview, so how hard would it be to remember the App Link and use it?
There could be two possible issues:
Either the one described by #NJ, i.e. you are just trying to open the link in the Facebook app on the same device from which you posted the link.
Solution: either open the link on another device, or close and re-open your Facebook app and refresh multiple times.
Or you have some error in your meta tags. There is one important thing, though, that Facebook never mentions: they cache the URL you provide. So once anyone has shared the web link on Facebook, the whole set of meta tags is cached, and your updated meta tags won't be parsed by Facebook.
Solution: to get past the issue, use the Facebook debug tool. Enter the URL of the web page that includes your meta data, then:
- click "Show existing scrape information" to find any errors;
- click "Fetch new scrape information" to refresh your URL on Facebook; this clears the cache for that URL on Facebook's server.

How or where do music lyric sites get their data?

There are tonnes of music lyric sites out there. A while back, I was looking at some lyrics for a band I am into, and it made me think: "How does this site obtain all these lyrics, and how can I get my hands on something like this?" I could not find much back then, so I decided to write a program that basically parsed a site for a band's information and lyrics and placed the data in a database that I created.
But I am still wondering how these sites get their data. My way is not very efficient and very site-specific, and if the site changes its page structure, I have to change my parsing program. There must be a simpler way.
Anyone's thoughts are greatly appreciated!
I'd guess at either JSON or XML files. To "get your hands on it": there are various ways and means of downloading data from a web site. wget is one means; not that I condone it, but it's hardly a secret.
Most of these websites get their lyrics from users. Musixmatch, for example, allows users to create lyrics if they do not exist in its database. When a user creates lyrics, they are presumably saved automatically into Musixmatch's database. There are tons of lyric websites allowing users to upload lyrics.
Another way websites get their data is through data mining, just like you said, writing a parser/scraper to go through someone else's website.
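As a sketch of that approach in Node.js (the URL and the '.lyrics' selector are hypothetical; cheerio is one common HTML-parsing library, and this assumes Node 18+ for the built-in fetch):

    // Fetch a page and pull the lyrics block out with a site-specific CSS selector.
    const cheerio = require('cheerio');

    async function scrapeLyrics(url) {
      const res = await fetch(url);       // built-in fetch (Node 18+)
      const html = await res.text();
      const $ = cheerio.load(html);
      return $('.lyrics').text().trim();  // '.lyrics' is a placeholder selector
    }

    scrapeLyrics('https://example.com/artist/song')
      .then(function (lyrics) { console.log(lyrics); })
      .catch(function (err) { console.error(err); });

As the question notes, this stays brittle: the selector breaks whenever the target site changes its markup.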

How do I block web scraping without blocking well-behaved bots?

I'm building an e-commerce website with a large database of products. Of course, it is nice when Google indexes all the products of the website. But what if some competitor wants to web-scrape the website and grab all the images and product descriptions?
I was observing some websites with similar lists of products, and they place a CAPTCHA so that "only humans" can read the list of products. The drawback is that it is invisible to Google, Yahoo, and other well-behaved bots.
You can discover the IP addresses that Google and the others are using by checking visitor IPs with whois (on the command line or on a web site). Then, once you've accumulated a stash of legit search engine addresses, allow them into your product list without the CAPTCHA.
If you're worried about competitors using your text or images, how about a watermark or customized text?
Let them take your images and you'd have your logo on their site!
Since a potential screen-scraping application can spoof the user agent and HTTP referrer (for images) in the header and use a timing schedule similar to a human browser's, it is not possible to completely stop professional scrapers. But you can check for these things nevertheless and prevent casual scraping.
I personally find CAPTCHAs annoying for anything other than signing up on a site.
One technique you could try is the "honey pot" method: it can be done either by mining log files or via some simple scripting.
The basic process is that you build your own blacklist of scraper IPs by looking for IP addresses that view 2+ unrelated products in a very short period of time. Chances are these IPs belong to machines. You can then do a reverse lookup on them to determine if they are nice (like Googlebot or Slurp) or bad.
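A minimal sketch of that heuristic as an Express-style middleware (the thresholds and the in-memory store are illustrative; a real deployment would persist counts in something like Redis):

    // Flag IPs that view too many distinct products within a short window.
    const recentHits = new Map(); // ip -> [{ productId, time }]

    const WINDOW_MS = 10 * 1000;  // look-back window
    const MAX_PRODUCTS = 2;       // distinct products allowed per window

    function scraperDetector(req, res, next) {
      const now = Date.now();
      const hits = (recentHits.get(req.ip) || [])
        .filter(function (h) { return now - h.time < WINDOW_MS; });
      hits.push({ productId: req.params.id, time: now });
      recentHits.set(req.ip, hits);

      const distinct = new Set(hits.map(function (h) { return h.productId; }));
      if (distinct.size > MAX_PRODUCTS) {
        // Candidate for the blacklist; reverse-lookup before actually blocking.
        console.warn('possible scraper:', req.ip);
      }
      next();
    }

    // Usage: app.get('/product/:id', scraperDetector, renderProduct);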
Blocking web scrapers is not easy, and it's even harder to avoid false positives.
Anyway, you can add some netranges to a whitelist and not serve any CAPTCHA to them.
All the well-known crawlers (Bing, Googlebot, Yahoo, etc.) always use specific netranges when crawling, and all those IP addresses resolve to specific reverse lookups.
A few examples:
Google IP 66.249.65.32 resolves to crawl-66-249-65-32.googlebot.com
Bing IP 157.55.39.139 resolves to msnbot-157-55-39-139.search.msn.com
Yahoo IP 74.6.254.109 resolves to h049.crawl.yahoo.net
So let's say that '*.googlebot.com', '*.search.msn.com' and '*.crawl.yahoo.net' addresses should be whitelisted.
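A sketch of that verification in Node.js: reverse-resolve the visitor's IP, check the hostname against the whitelisted suffixes, then forward-resolve the hostname to confirm it points back to the same IP (so a spoofed PTR record doesn't get through):

    const { reverse, resolve4 } = require('dns').promises;

    const CRAWLER_SUFFIXES = ['.googlebot.com', '.search.msn.com', '.crawl.yahoo.net'];

    async function isWhitelistedCrawler(ip) {
      try {
        const hostnames = await reverse(ip); // e.g. ['crawl-66-249-65-32.googlebot.com']
        for (const host of hostnames) {
          if (!CRAWLER_SUFFIXES.some(function (s) { return host.endsWith(s); })) continue;
          // Forward-confirm: the hostname must resolve back to the same IP.
          const addrs = await resolve4(host);
          if (addrs.includes(ip)) return true;
        }
      } catch (e) {
        // No PTR record or lookup failure: treat as not whitelisted.
      }
      return false;
    }

    // isWhitelistedCrawler('66.249.65.32').then(function (ok) { console.log(ok); });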
There are plenty of whitelists you can implement from sources out on the internet.
That said, I don't believe CAPTCHA is a solution against advanced scrapers, since services such as deathbycaptcha.com or 2captcha.com promise to solve any kind of CAPTCHA within seconds.
Please have a look at our wiki, http://www.scrapesentry.com/scraping-wiki/, where we wrote many articles on how to prevent, detect and block web scrapers.
Perhaps I over-simplify, but if your concern is about server performance, then providing an API would lessen the need for scrapers and save you bandwidth and processor time.
Other thoughts listed here:
http://blog.screen-scraper.com/2009/08/17/further-thoughts-on-hindering-screen-scraping/
