How can I track issues from multiple different issue trackers in a single place? [closed] - bug-tracking

Like a lot of open-source developers I find myself interacting with maybe dozens of different projects' issue trackers - some for work, some as a hobbyist; some frequently, some more rarely; sometimes to report bugs, sometimes to contribute patches, sometimes to follow others' bug reports that affect me, sometimes to organize my own work on my own projects.
The problem is, this activity is scattered across different web apps (GitHub, Bitbucket, Trac, Bugzilla, Mantis, Jira, ...) on different projects all over the web, and there's no one place to check the status of the issues I'm trying to stay on top of.
I want one dashboard-style app where I can browse, search, and sort (by updated date, priority, etc.) everything assigned to me, or any bugs I've reported, or any bugs I'm watching for updates - across all projects - without having to manually re-enter all those issues into the dashboard: I want to just feed it a URL to an existing issue in some other tracker and have it track that issue's status for me.
You could almost get there with just an RSS feed reader, except that to be really useful the app would need to know more about the relevant metadata so you can sort and filter as needed.
Has anybody built anything like that? Bonus if it provides write capabilities too, at least for common tasks like adding a comment or marking an issue resolved.
I've never heard of any such thing, and I'm constantly wishing for it. If it doesn't exist I might have to take a crack at it.
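If I did take a crack at it, I'd expect the core to be per-tracker adapters that normalize issue metadata into one common schema. Here's a minimal sketch of that idea in Python, using GitHub's public REST API as the one worked-out adapter; the normalized field names are my own assumption, and the example issue URL is purely illustrative:

```python
# Minimal sketch: normalize issue status from a tracker URL.
# Only a GitHub adapter is shown; Bitbucket, Trac, Bugzilla, etc.
# would each need their own adapter mapping into the same schema.
import re
import json
import urllib.request

def fetch_github_issue(url):
    """Map a github.com issue URL to a normalized status record."""
    m = re.match(r"https://github\.com/([^/]+)/([^/]+)/issues/(\d+)", url)
    if not m:
        return None
    owner, repo, number = m.groups()
    api = f"https://api.github.com/repos/{owner}/{repo}/issues/{number}"
    with urllib.request.urlopen(api) as resp:
        issue = json.load(resp)
    # Normalize into the common schema the dashboard would sort/filter on.
    return {
        "title": issue["title"],
        "state": issue["state"],          # "open" or "closed"
        "updated": issue["updated_at"],   # ISO 8601 timestamp
        "url": url,
    }

# Illustrative example; any public GitHub issue URL would do.
print(fetch_github_issue("https://github.com/python/cpython/issues/100000"))
```

Sorting and filtering across trackers then reduces to sorting a list of these records by "updated" or "state".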

I'm not sure if this will give you what you are looking for, but "undock" supports multiple issue trackers. Whether it supports them all in one interface I'm not sure, but it may be worth checking out.
Edit: yes, it does support them in one interface - the next question is whether you want to read them off a mobile device :-)
Anyway, I hope it is of some use.


Is there any effort towards a scraper- and bot-friendly Internet? [closed]

I am working on a scraping project for a company. I used Python libraries such as Selenium, mechanize, and BeautifulSoup4, and succeeded in putting the data into a MySQL database and generating the reports they wanted.
But I am curious: why is there no standardization of website structure? Every site uses different names/ids for its username/password fields. I looked at the Facebook and Google login pages, and even they name their username/password fields differently. Other elements are likewise named arbitrarily and placed anywhere.
One obvious reason I can see is that bots would eat up a lot of bandwidth, and websites are basically targeted at human users. A second reason may be that websites want to show advertisements. There may be other reasons too.
Would it not be better if websites didn't have to provide APIs and there were instead a single standard framework for bot/scraper access? For example, every website could have a scraper-friendly version, structured and named according to a universally agreed-upon specification, plus a page that acts as a help feature for the scraper. To access this version of the site, a bot/scraper would have to register itself.
This would open up an entirely different kind of internet to programmers. For example, someone could write a scraper that monitors websites listing vulnerabilities and exploits, and automatically closes the security holes on users' systems. (For this, those websites would have to publish a version with data that can be applied directly, like patches and where they should be applied.)
And all this could easily be done by an average programmer. On the dark side, one could write malware that updates itself with new attack strategies.
I know it is possible to log in to other websites with a Facebook or Google account via OAuth. But that is only a small part of scraping.
My question boils down to: why is there no such effort out in the community? And if there is one, kindly refer me to it.
I searched Stack Overflow but could not find a similar question. I am also not sure this kind of question is appropriate for Stack Overflow; if not, please refer me to the correct Stack Exchange site.
I will edit the question if something in it doesn't meet community criteria. But it's a genuine question.
EDIT: I got the answer, thanks to #b.j.g: there is such an effort by the W3C, called the Semantic Web. (Anyway, I am sure Google will hijack the whole internet one day and make this possible, within my lifetime.)
EDIT: I think what you are looking for is the Semantic Web.
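To make that concrete: the closest thing in the wild to the "scraper-friendly version" proposed above is schema.org structured data, which many sites already embed as JSON-LD for machine readers. A minimal sketch of pulling it out with BeautifulSoup (the URL is just a placeholder):

```python
# Minimal sketch: extract schema.org JSON-LD metadata from a page.
# This is the nearest existing analogue of a standardized,
# scraper-friendly view of a site's content.
import json
import requests
from bs4 import BeautifulSoup

def extract_json_ld(url):
    """Return the JSON-LD blocks a page publishes for machine readers."""
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    blocks = []
    for tag in soup.find_all("script", type="application/ld+json"):
        try:
            blocks.append(json.loads(tag.string))
        except (TypeError, ValueError):
            pass  # empty or malformed block; skip it
    return blocks

# Many news and product pages publish Article/Product records this way.
for block in extract_json_ld("https://example.com/some-page"):
    if isinstance(block, dict):
        print(block.get("@type"), "-", block.get("name") or block.get("headline"))
```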
You are assuming people want their data to be scraped. In actuality, the data people scrape is usually proprietary to the publisher, and when it is scraped they lose exclusivity over it.
I had trouble scraping yoga schedules in the past, and I concluded that the developers were consciously making them difficult to scrape so third parties couldn't easily use their data.

Any public datasets of authentication logs? [closed]

I'm looking for login logs, particularly from a mail server, to use as sample data. I've looked through a lot of public datasets but have been unable to find anything of the sort.
I'd like to be able to look at a log showing login attempts, with fields like timestamp, requester IP address, user account, success/failure, etc.
Any help would be appreciated.
I'm not surprised you did not find anything.
The information is sensitive and should not be shared. For example, someone might accidentally mistype their password as their login name. In the logs this shows up as a failed login, and the next login from the same IP might then reveal the real username.
Also, this data is of little use to anyone, particularly once it is historic or has been anonymized. Why would anyone bother to share it?
Furthermore, login (and attack, assuming that is what you are looking into) patterns change over time, so any conclusion drawn from historic data will likely no longer hold.
So you'll have to get this data fresh, under NDA, from someone who trusts you not to misuse it - if it actually is of any use.
You might have better success looking for aggregated data - temporal patterns of user activity vs. failed logins, for example. But either way, this site is about programming questions, not "where do I get data"; you'll mostly have to use Google.
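If all you need is sample data to develop against, one workaround is generating synthetic logs yourself. A minimal sketch, with an entirely made-up field layout (no real mail-server format is implied):

```python
# Minimal sketch: generate fake login-attempt records to develop against.
# The field layout is invented; adjust it to whatever format you need.
import random
from datetime import datetime, timedelta

USERS = ["alice", "bob", "carol", "dave"]

def synthetic_auth_log(n=1000, fail_rate=0.2, start=None):
    """Yield n login-attempt records at random intervals."""
    ts = start or datetime(2016, 1, 1)
    for _ in range(n):
        ts += timedelta(seconds=random.randint(1, 300))
        yield {
            "timestamp": ts.isoformat(),
            "ip": "10.0.{}.{}".format(random.randint(0, 255),
                                      random.randint(1, 254)),
            "user": random.choice(USERS),
            "result": "FAIL" if random.random() < fail_rate else "OK",
        }

for rec in synthetic_auth_log(n=5):
    print("{timestamp} {ip} {user} {result}".format(**rec))
```

It won't reproduce real-world attack patterns, of course, but it is enough to exercise parsing, storage, and reporting code.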

Good tools to record network usage of a web application [closed]

I am about to start work on a project to reduce the bandwidth usage of a web application. We are going to implement several techniques, such as delaying the loading of JavaScript files until they are needed, to try to reduce the overhead of running the application.
The first thing I want to do is test the current state of things so we can create a baseline. Ideally we would then like to automate this testing so that we can track the network usage of the application as we make our changes.
Can anyone suggest tools that are good at doing this? At its most basic, the tool can be run manually, but extra brownie points will be given for suggestions on how to automate the test!
Thanks in advance.
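One low-tech way to automate such a baseline is to export a HAR file from the browser's network panel and total the response sizes. A minimal sketch in Python, assuming a HAR 1.2 export saved as baseline.har (the filename is just a placeholder):

```python
# Minimal sketch: total the bytes transferred per page load from a
# HAR export (browser dev tools -> Network -> save as HAR).
import json

def har_summary(path):
    with open(path) as f:
        entries = json.load(f)["log"]["entries"]
    # bodySize can be -1 when unknown, so clamp it to zero.
    total = sum(max(e["response"].get("bodySize", 0), 0) for e in entries)
    print("{} requests, {:.1f} KB transferred".format(len(entries),
                                                      total / 1024.0))
    # The biggest offenders, largest first:
    ranked = sorted(entries,
                    key=lambda e: e["response"].get("bodySize", 0),
                    reverse=True)
    for e in ranked[:5]:
        print("{:>10} bytes  {}".format(e["response"].get("bodySize", 0),
                                        e["request"]["url"]))

har_summary("baseline.har")
```

Run it after each change and diff the totals; a headless browser that saves HAR files would close the automation loop.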
Firebug (especially the Network tab) will show you everything a page loads, with times and sizes.
You might also take a look at YSlow.
I would just use the IIS logs. IIS can log to text files, and then you can analyze your data with this tool.
If you don't want to get fancy, you can log via ODBC to a database and query with plain SQL.

IIS Log Analyzer [closed]

I want to analyze the IIS logs for a website for things like hits, keywords, countries accessed from, etc.
Has anyone used any (free) tools that were useful in this regard?
There's LogParser; there's a blog article about how to use it here. You need to be comfortable with SQL to use it, though. Apparently there's also a GUI for it, but I don't have any experience with that.
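If you'd rather script the same kind of query yourself, IIS's W3C-format logs are plain text with a #Fields: directive naming the columns. A minimal Python sketch counting hits per URL (the log filename just follows IIS's usual u_exYYMMDD.log pattern):

```python
# Minimal sketch: count hits per URL from an IIS W3C-format log file.
# The #Fields: directive names the columns, so we read it to locate
# cs-uri-stem rather than hard-coding a column index.
from collections import Counter

def top_pages(path, limit=10):
    hits = Counter()
    fields = []
    with open(path) as f:
        for line in f:
            if line.startswith("#Fields:"):
                fields = line.split()[1:]  # column names after "#Fields:"
            elif not line.startswith("#") and fields:
                row = dict(zip(fields, line.split()))
                hits[row.get("cs-uri-stem", "-")] += 1
    for url, count in hits.most_common(limit):
        print("{:>8}  {}".format(count, url))

top_pages("u_ex160101.log")
```

The same pattern extends to countries (resolve c-ip against a GeoIP database) or keywords (parse the cs(Referer) query string).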
Nihuo Web Log Analyzer is very simple, easy to configure, and very good at analyzing IIS and Apache access log files. The reports generated by this tool are also very good.
You can use it freely, with full functionality, for a 30-day evaluation period.
Update: I am the developer of this software.
There is a simple answer to this: don't.
Log files are next to useless for looking at your website traffic; they are massively inaccurate. Log file analysis is useful for network engineers looking at traffic management.
If you want to see who looked at your website, from where, with which browser, and via what keyword, just install Google Analytics. Although it has a few downsides, it's much better for the information you require, and it's also free.
Take a look at http://www.googlelytics.net/awstats-log-file-analysis-vs-google-analytics/ for a view of each.

Can anyone suggest a small, simple and free bugtracker? [closed]

I'm looking for a small and simple (emphasis on simple) bugtracker for a small project. It should run on Apache/PHP, though I'll consider alternatives too (no Windows, though). Oh, and I don't have any money to spend on it, so it should be free. :P
Any recommendations?
Added: Please, no hosted solutions. I want to host it myself.
Trac. It is free, simple, and runs on Apache.
See the demo site to try it out yourself.
Bugzilla is written in Perl, but it is really easy to set up; the installation is mostly done by the setup script.
Pivotal Tracker: http://www.pivotaltracker.com/
It's simple and great for project management too. It's also hosted and free! No setup - you just need a login.
I really like Mantis: http://www.mantisbt.org/ . You can see it in action at http://bugs.scribus.net , for example.
There is much personal taste involved; this is just mine: I think Mantis is simple, still offers you quite a few features, but it doesn't bang you in the head with them. I find it very comfortable to work with.
TBH, I have never used Mantis as an admin, just as a user/reporter, but I do assume the ease of use carries over into the lower-level functionality.
FogBugz has a free, hosted version if you're working alone, or with one other person.
Roundup tracker: http://roundup.sourceforge.net/
It's free
It's open source
It has a built-in web server so it can host itself, or it can do the Apache thing
It can run on top of a database, or just files
It's written in Python and is insanely hackable if that's your thing
It has a vibrant community of people writing plugins - e.g. wiki-like issue editing
Check out BugTracker.net.
It's easy to use and very productive.
Check out the happy people in the town of Simplton.
