Sharing information between back-end and front-end developers [closed]

This question is not related to code or any bug; it is an organisational query. I am a front-end developer, and I consume web APIs developed by the back-end developers in my company. The problem is that they share them via Postman, with the APIs segregated into folders per project. First, the nomenclature of the APIs, as well as their functionality, is inconsistent, which creates a lot of confusion when consuming them. Secondly, there is no indication of whether an API is deployed on a server or not, so sometimes I end up writing the code and then realize that the specific API is not deployed yet.
My question is: how does the rest of the world do this? How is communication between developers established in this domain, and how can one overcome this problem?

I hope I am interpreting your question correctly:
One of the methods used in the industry is Scrum (specifically daily stand-ups), where you talk about the work you intend to perform that day. This gives the back-end developers an opportunity to tell you it's not ready yet. It really depends on the culture in the company. Why are they writing endpoints that are not yet deployed, and if they are not deployed, how difficult is it for you to get them deployed?
Another way is DevOps, which many think of as Scrum for the entire value chain.
These methodologies are not something you can dictate, however; they arose precisely because of the problem you are referring to.
In practice: you should probably ask for another folder in Postman called "SafeToUse" or "ReadyForConsumption"; that way you can clearly see what is on its way and what is ready.
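To complement such a folder, a minimal smoke-test sketch you could run before coding against an endpoint might look like this (the URLs and the use of the Python requests library are my assumptions, not anything your back-end team necessarily provides):

    # Minimal sketch: probe endpoints to see which are actually deployed
    # before writing client code against them. URLs are hypothetical.
    import requests

    ENDPOINTS = [
        "https://api.example.com/v1/users",
        "https://api.example.com/v1/orders",
    ]

    for url in ENDPOINTS:
        try:
            # HEAD keeps the probe cheap; a 4xx/5xx or no response at all
            # suggests the endpoint is not usable yet.
            resp = requests.head(url, timeout=5)
            state = "deployed" if resp.status_code < 400 else "error %d" % resp.status_code
        except requests.RequestException as exc:
            state = "unreachable (%s)" % exc.__class__.__name__
        print(url, "->", state)

Running something like this against the "ReadyForConsumption" list would catch the not-yet-deployed case before you write any client code.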
I hope this answers your question. I can't recommend anything more specific without knowing the kind of work you perform; in my experience, the front end and back end for a given project are normally developed in close communication.

Related

Which load testing method is better? API testing or full website testing [closed]

I have an application implemented in a microservices-style architecture; it consists of 6 services (6 Docker containers). I need to load test this application, but as I don't have much experience in testing, I'm not sure which method to use.
Right now, I have used the Gatling load-testing tool. I record the testing script by starting the recorder and wandering around my application to capture all the routes. I went through most of the routes in a single recording in order to mimic a realistic user; my thinking was that users normally use the application like this, and I can load test at 1000 times that traffic by editing the number of threads/users.
Later I read about API testing, which focuses on the APIs themselves, loading each API heavily. So I'm confused about which testing method I should use. If we go for API testing, it will only show how far that particular API can scale, right? (Not sure.)
Is there any issue with my method of load testing?
It depends entirely on what you hope to achieve...
If you're looking to validate that your entire application (code + production infrastructure) can handle a given load, then driving it as though going through the full website is the right path.
However, if you're looking to see how a particular API scales, or want to help developers explore the ramifications of changes, then you will probably want to drive that API directly, to avoid other limitations your system may have.
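As an illustration of driving a single API directly, here is a minimal sketch using Locust, a Python load-testing tool (the endpoint path and user count are assumptions; the Gatling setup from the question would work just as well):

    # Minimal sketch: hammer one API endpoint directly with Locust.
    # The /api/items path is a hypothetical placeholder.
    from locust import HttpUser, task, between

    class ApiUser(HttpUser):
        wait_time = between(1, 2)  # simulated users pause 1-2 s between calls

        @task
        def get_items(self):
            self.client.get("/api/items")

    # Run with, e.g.:
    #   locust -f api_load.py --host https://api.example.com --users 1000

Comparing the numbers from this kind of single-API run against the full recorded-journey run tells you whether the bottleneck is the service itself or something else in the stack.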

Distributed file system for Linux [closed]

I've got a web app where I use the plain file system for my custom logs: a lot of small files. I don't want to put them into a DB; that works quite well for me. But now I need to scale my app by putting a load balancer in front, so I also need to keep those logs in sync between servers. Is there any reliable solution for such cases? I know I could sync them by some OS means or by scripting, but I'm wondering if there is a better solution for this scenario. Is this a case for MongoDB (or something more modern), or is it better to keep the logs on the file system as plain files?
This question is going to get you some heat, since you're essentially asking for our opinion. I'll be frank, though, and won't argue with anyone, since it's just MY opinion. With web apps, in my humble opinion, it's always better to keep your data in a DB, both for scalability and for analytical research. I know little about what your app does, but it's easier to write third-party data apps that tell you how many of X or Y there are when the data is centrally stored in a DB, since the app that consumes the data can then run anywhere. I know I probably wasted your time with an argument, but hey, I hope I helped a bit.
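If you did go the DB route, a minimal sketch of centralized logging with MongoDB might look like this (the connection string and document shape are assumptions):

    # Minimal sketch: write log entries to one central MongoDB instance so
    # every server behind the load balancer shares the same log store.
    # Connection string and field names are hypothetical.
    from datetime import datetime, timezone
    from pymongo import MongoClient

    client = MongoClient("mongodb://logs.example.com:27017")
    logs = client.myapp.logs  # database "myapp", collection "logs"

    def log(server_id, level, message):
        logs.insert_one({
            "ts": datetime.now(timezone.utc),
            "server": server_id,
            "level": level,
            "message": message,
        })

    log("web-1", "INFO", "user signed in")

The sync problem then disappears, because no server keeps logs locally in the first place.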

Is there any effort towards a scraper- and bot-friendly Internet? [closed]

I am working on a scraping project for a company. I used Python libraries such as Selenium, mechanize, and BeautifulSoup4, and succeeded in putting the data into a MySQL database and generating the reports they wanted.
But I am curious: why is there no standardization of website structure? Every site has a different name/id for its username/password fields. I looked at the Facebook and Google login pages, and even they name the username/password fields differently. Other elements are likewise named arbitrarily and placed anywhere.
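To illustrate, here is a minimal sketch of the kind of heuristic guessing every scraper ends up reinventing (the candidate attribute names are assumptions drawn from common conventions, not any standard):

    # Minimal sketch: guess which <input> is the username field, since
    # every site names it differently. Candidates are common conventions.
    from bs4 import BeautifulSoup

    CANDIDATES = {"username", "user", "email", "login", "userid"}

    def find_username_field(html):
        soup = BeautifulSoup(html, "html.parser")
        for tag in soup.find_all("input"):
            name = (tag.get("name") or tag.get("id") or "").lower()
            if name in CANDIDATES:
                return tag
        return None  # heuristic failed: yet another naming scheme

    html = '<form><input name="login"><input type="password"></form>'
    print(find_username_field(html))  # -> <input name="login"/>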
One obvious reason I can see is that bots would eat up a lot of bandwidth, while websites are basically targeted at human users. A second reason may be that websites want to show advertisements. There may be other reasons too.
Would it not be better if websites didn't have to provide APIs, and there were instead a single framework for bot/scraper login? For example, every website could have a scraper-friendly version that is structured and named according to a universally agreed standard specification, plus a page that provides help for the scraper. To access this version of the website, a bot/scraper would have to register itself.
This would open up an entirely different kind of Internet to programmers. For example, someone could write a scraper that monitors vulnerability and exploit listing websites and automatically closes the security holes on a user's system. (For this, those websites would have to publish a version with data that can be applied directly, like patches and where they should be applied.)
And all of this could easily be done by an average programmer. On the dark side, one could write malware that updates itself with new attack strategies.
I know it is possible to use Facebook or Google login on other websites via Open Authentication (OAuth), but that is only a small part of scraping.
My question boils down to: why is there no such effort out in the community? And if there is one, kindly refer me to it.
I searched Stack Overflow but could not find a similar question. I am also not sure that this kind of question is appropriate for Stack Overflow; if not, please refer me to the correct Stack Exchange site.
I will edit the question if something in it does not meet community criteria, but it's a genuine question.
EDIT: I got the answer, thanks to #b.j.g: there is such an effort by the W3C, called the Semantic Web. (Anyway, I am sure Google will hijack the whole Internet one day and make this possible, within my lifetime.)
I think what you are looking for is the Semantic Web.
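For a taste of what this looks like in practice today, here is a minimal sketch reading schema.org JSON-LD markup, which is structured data a scraper can consume without guessing at HTML (the embedded page snippet is hypothetical):

    # Minimal sketch: extract schema.org JSON-LD, the Semantic Web idea
    # in its most common form today. The page snippet is hypothetical.
    import json
    from bs4 import BeautifulSoup

    html = """
    <script type="application/ld+json">
    {"@context": "https://schema.org", "@type": "Event",
     "name": "Morning Yoga", "startDate": "2016-05-01T09:00"}
    </script>
    """

    soup = BeautifulSoup(html, "html.parser")
    for tag in soup.find_all("script", type="application/ld+json"):
        data = json.loads(tag.string)
        print(data["@type"], "-", data["name"])  # Event - Morning Yoga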
You are assuming people want their data to be scraped. In reality, the data people scrape is usually proprietary to the publisher, and when it is scraped, the publisher loses exclusivity over that data.
I had trouble scraping yoga schedules in the past, and I concluded that the developers were consciously making the pages difficult to scrape so that third parties couldn't easily use their data.

Can I trust Node.js? Safety/stability issues [closed]

Don't get me wrong, people; let me clarify.
I would like to ask whether I can trust Node.js. I know it's an amazing tool, but it's a really young platform, to be honest. Should I start playing around with it (production, not just experimental use), or should I wait until it "grows up"?
Does it work fine on Windows? At the beginning it was not supported there. Are there any stress tests that actually prove it is safe and can be trusted?
It demands writing a lot of code by hand for things that other platforms do in a single line. I know you're going to tell me "that depends on your experience", and I agree, but is it worth "learning" Node? What if its development stops? Again, I'm only asking because it's pretty young.
Which of Node's add-ons and modules can be trusted for safety/stability? There are so many out there.
Is it stable? And finally, what about Node's interoperability? Does it work on every platform/browser? What about smartphones and mobile devices?
Again, don't get me wrong, I am not criticizing. I am just concerned because it's pretty new, everybody is excited, and I haven't seen any cons or safety/stability issues discussed anywhere.
Thanks
I don't understand why anyone would choose Node.js for a back end: statically typed code is easier to maintain, and JavaScript is not the best (or even a good?) language.
That said, there are situations where it makes a lot of sense to have the same code running in the browser and in the back end; when you run into one of these, you will know. And then Node works just fine. We've had it in production for months, exposing its functionality as an internal web service to our back-end application, and we haven't had any problems with it.

Security evaluation during project management [closed]

Generally speaking.
How does a project manager evaluate and track security issues for a project? Is there any online resource I can use as a reference?
I would say that you would track this like everything else you track on your project.
Make sure that there is an architecture and project requirements review: go through all aspects of the architecture and design, and document any issues and questions as you go along. Depending on your application, this may include securing external communication and communication between different parts of the application, and understanding any possibilities for malicious user input. If your application stores any data, review what data is stored and ask "what would happen if this data were lost or leaked?". Understand how all sensitive data stores are encrypted, and make sure that user passwords are never stored; store a one-way hash instead. Review how (and whether) encryption keys can be rotated, so that the loss or leak of a key does not compromise security.
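On the password point specifically, here is a minimal sketch of salted one-way hashing using only Python's standard library (the iteration count is illustrative; a dedicated library such as bcrypt is often preferred):

    # Minimal sketch: store a salted one-way hash, never the password.
    # The iteration count is illustrative; tune it to current guidance.
    import hashlib
    import hmac
    import os

    def hash_password(password):
        salt = os.urandom(16)  # unique salt per user
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600000)
        return salt, digest

    def verify_password(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600000)
        return hmac.compare_digest(candidate, digest)  # constant-time compare

    salt, digest = hash_password("hunter2")
    print(verify_password("hunter2", salt, digest))  # True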
Document all issues and questions found in your favorite bug-tracking and task-management tool, even if only as reminders to get back and inspect the actual implementation.
I think you add them as 'risks' or 'tasks' in your ALM system, depending on which phase of the project you are currently in.
Evaluating security issues is usually deferred first to the developers or IT professionals, depending on the nature of the issue, and then reported back to the PM for review.
