After some time spent scouring Google and looking through the SonarQube API documentation, along with trying a few permutations on common patterns, I have arrived at the point where I am wondering if it is even possible to use multiple parameters when doing an issue search in SonarQube's API.
The purpose of the search is to populate a team radiator with issue data from SonarQube. This data will be combined with build data from other sources (otherwise I would just link to the SonarQube display page).
The currently configured API URL is:
https://sonarqubesitehere.com/api/issues/search?projectKeys=com.projectnamehere
(This is dummy code with names changed to protect the innocent)
I would like to be able to add a second parameter to this search so that I receive only major (or minor) issues that belong to the specific project I specify. The search term for that is /search?severities=MAJOR.
Has anyone wrangled with this particular problem?
Please check the web service (WS) API documentation on sonarqube.com or on your own instance, e.g.: https://your-sonarqube.com/web_api/api/issues/search.
Here's an example of api/issues/search with several parameters.
Hummmm... Provided that you read the Web API documentation for /issues/search, and that you know how to correctly write a URL that uses parameters, it's quite easy to find that the solution is the following:
https://<your_server>/api/issues/search?projectKeys=project1Key,project2Key&severities=MINOR,MAJOR
Live example on SonarQube.com: https://sonarqube.com/api/issues/search?projectKeys=clang,git&severities=BLOCKER
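If it helps, here's a quick sketch of calling that endpoint from code; the server URL, project key, and YOUR_TOKEN are placeholders for your own values:

```typescript
// Quick sketch (Node 18+): query SonarQube issues filtered by project and severity.
// The server URL, project key, and YOUR_TOKEN are placeholders.

const BASE = "https://sonarqubesitehere.com";
const params = new URLSearchParams({
  projectKeys: "com.projectnamehere",
  severities: "MAJOR,MINOR", // comma-separated, exactly as in the URL above
});

async function fetchIssues(): Promise<void> {
  const res = await fetch(`${BASE}/api/issues/search?${params}`, {
    // Most instances require a token; SonarQube takes it as the basic-auth username.
    headers: {
      Authorization: "Basic " + Buffer.from("YOUR_TOKEN:").toString("base64"),
    },
  });
  const data = await res.json();
  // The response carries a total count plus the issue list for the current page.
  console.log(`${data.total} issues found`);
  for (const issue of data.issues) {
    console.log(`${issue.severity} ${issue.message}`);
  }
}

fetchIssues();
```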
I am currently prototyping a Shopware app in which I want to extend the search with our search API. We already have a working plugin for that in the store.
I found those two references for hooks:
https://developer.shopware.com/docs/resources/references/app-reference/webhook-events-reference
https://developer.shopware.com/docs/resources/references/app-reference/script-reference/script-hooks-reference
It seems there is no webhook for the search at all, just a script hook for a finished search. With a plugin, we could simply extend the ProductSearchRoute and be completely flexible.
Are search extensions not planned right now?
Cheers,
Tobias
I assume you want to alter the criteria for fetching the products. As of today, this is not yet possible with non-self-hosted apps. You could use the app scripts to enrich or replace the contents of an already loaded page, as you mentioned; obviously, that comes with some drawbacks regarding performance. The capabilities of apps are being enhanced continuously, though, so there's a chance search manipulation might become possible rather soon.
My Problem
I want to be able to migrate my Google Docs to a regular website while maintaining the links I had created between my Google Docs. I frequently link one Google Doc to another, so I have created something similar to a wiki. For example, let’s suppose I had created two Google Docs: Google Doc #1 and Google Doc #2.
Subsequently, let’s suppose I had created a link (a hyperlink) in Google Doc #1 to Google Doc #2. Of course, that's an extremely simple example. Let’s make it more complex: imagine I had created a couple of thousand Google Docs with many links (hyperlinks) between them.
Of course, backing up those Google Docs would be trivial, either by using Google Takeout or rsync. However, what would happen if I wanted to move those Google Docs to a regular website? The myriad hyperlinks I had created would fail to point to the documents on my regular website.
That is, on my regular website, if I were to click the link on the page containing what had been Google Doc #1 (https://my_regular_website.com/google_doc_001), then instead of opening the page containing what had been Google Doc #2 (https://my_regular_website.com/google_doc_002), the link would point to the original Google Doc #2 (https://drive.google.com/drive/folders/google_doc_002).
My Technical Question
I read that, “You can use the `contentRestrictions.readOnly` field on a `file` resource to lock a file and prevent modifications to the title, uploading a new revision, and addition of comments.” Source: Protect file content from modification
However, I would like to prevent modifications to the file's title yet allow the contents of the file to be edited. For example, I might name a file something like “1cn2OX4U67mY925GzG80hRBYjpqq2conSi9xgYikgwIM”, which is the unique portion of a Google Docs URL.
That way, on my regular website, by using a simple regex, I could “relink” documents that pointed to https://docs.google.com/document/d/1cn2OX4U67mY925GzG80hRBYjpqq2conSi9xgYikgwIM
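A rough sketch of that relinking step might look like the following (the site URL is the dummy one from above, and the regex only covers plain docs.google.com/document/d/... links):

```typescript
// Rough sketch: rewrite Google Docs links to point at my regular website.
// Assumes each migrated page lives at https://my_regular_website.com/<docId>,
// where <docId> is the unique portion of the original Google Docs URL.

const DOC_LINK = /https:\/\/docs\.google\.com\/document\/d\/([-\w]+)[^"'\s<)]*/g;

function relinkDocLinks(html: string): string {
  // Keep the captured Doc ID and swap the host/path around it.
  return html.replace(DOC_LINK, (_m, docId) => `https://my_regular_website.com/${docId}`);
}

// Example:
const page =
  '<a href="https://docs.google.com/document/d/1cn2OX4U67mY925GzG80hRBYjpqq2conSi9xgYikgwIM/edit">My doc</a>';
console.log(relinkDocLinks(page));
// <a href="https://my_regular_website.com/1cn2OX4U67mY925GzG80hRBYjpqq2conSi9xgYikgwIM">My doc</a>
```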
Final Thoughts
I like using Google Docs as, dare I say it, a word processor. Sometimes I use Google Docs to write essays. Sometimes I use Google Docs to create documentation. Sometimes I use Google Docs to collaborate with others (instead of emailing). Furthermore, I often use Google Docs’ outline format, styles, and voice typing.
Sure, I suppose I could use an actual wiki. But although I’ve tried many different wikis over the years, I never enjoyed using them. I found them to be clunky and overly simplistic. Furthermore, I didn’t enjoy installing them and needing to back them up. At this point in time, I don't want to have to install and maintain any software on a VPS (virtual private server).
I checked the documentation you are referring to, and what you are trying to achieve is not possible: making a document read-only will prevent a new revision of the file from being created.
That also means you won't be able to change the comments, content, or title. At this time it is not possible to prevent only some modifications; it's all or none.
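For reference, here's a minimal sketch of what that all-or-nothing lock looks like through the Drive API's Node client (FILE_ID and the auth setup are placeholders):

```typescript
// Minimal sketch: apply the all-or-nothing lock via the Drive API (Node client).
// FILE_ID is a placeholder; auth assumes default credentials with the Drive scope.
import { google } from "googleapis";

async function lockFile(fileId: string): Promise<void> {
  const auth = new google.auth.GoogleAuth({
    scopes: ["https://www.googleapis.com/auth/drive"],
  });
  const drive = google.drive({ version: "v3", auth });

  // readOnly: true blocks new revisions, comments, and title changes together;
  // there is no finer-grained switch for locking just the title.
  await drive.files.update({
    fileId,
    requestBody: {
      contentRestrictions: [{ readOnly: true, reason: "Locked after migration" }],
    },
  });
}

lockFile("FILE_ID");
```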
Regards.
Thank you for giving me a piece of your time. This question really isn't a "how to", but more of an "is this possible, or am I just insane?". I've recently looked at some portfolio pages and found a really great idea at https://flexdinesh.github.io/. But in the "portfolio" section, instead of having just the characteristics of each project, is it possible to somehow use the GitHub API (or some equivalent) to extract and present data like the number of commits (or the table that GitHub shows on your project page), the project type (e.g. Java, JavaScript), and maybe even some more related information? For background, I am using React with Node.js. Again, this is probably useless to everyone out there, but I think it could be something cool if A) it actually exists, and B) it's not too much of a pain to implement. I've tried reading up on GitHub's documentation, looking online, and looking at different source code, but no luck there. If anyone has any information or feedback, I'm always open to it!
Thank you and have a good day
From what I understand, you want to display statistical information about the projects on your portfolio website.
GitHub provides an API that can return almost all the information you see on their website.
So, to get all the languages used in a repository, you can do a GET request to https://api.github.com/repos/:owner/:repo/languages.
To get the commits, you can do a GET request to https://api.github.com/repos/:owner/:repo/commits, and so on.
By default, these will get you the data of public repositories; if you want to display info from your private repositories, you need to provide an authentication token with each request.
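For example, here's a minimal sketch of those two calls (the repo name and GITHUB_TOKEN are placeholders):

```typescript
// Minimal sketch (Node 18+): fetch a repo's language breakdown and recent commits.
// "octocat/hello-world" and GITHUB_TOKEN are placeholders.

const REPO = "octocat/hello-world";
const headers: Record<string, string> = {
  Accept: "application/vnd.github+json",
  // Uncomment for private repos / higher rate limits:
  // Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
};

async function repoStats(): Promise<void> {
  // Returns bytes of code per language, e.g. { "JavaScript": 12345, "CSS": 678 }
  const langs = await fetch(`https://api.github.com/repos/${REPO}/languages`, { headers })
    .then(r => r.json());

  // Commits are paginated (30 per page by default), so this is only the
  // first page; follow the Link response header to count them all.
  const commits = await fetch(`https://api.github.com/repos/${REPO}/commits`, { headers })
    .then(r => r.json());

  console.log("Languages:", Object.keys(langs).join(", "));
  console.log("Commits on first page:", commits.length);
}

repoStats();
```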
You can read more about the available API calls here.
Let me know if you need any more help.
I might get flagged for this question, but I'll still give it a shot.
Since Google Site Search is going out of business and we are not interested in the free version of it, we decided to go with the Amazon CloudSearch option. The challenge, though, is that it is not straightforward: we have to build a crawler, and some features need to be custom built.
I am trying to find examples of websites that have used ACS successfully, but I am not able to find anything good. Has anyone tried using Amazon CloudSearch for their website search? Our website has around 15,000+ pages.
We are a .NET-based solution, so I am thinking of writing a crawler to extract content on a nightly basis and send it to Amazon. Would that be the right way?
ACS is based on Solr. If your site is under your control, I think the first step is to extract all the useful content and generate XML/JSON document batches from it, then use the AWS CLI to upload these documents to ACS. ACS has REST APIs that let you get the query results. You need to define the index fields before uploading the documents.
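For illustration, here's a rough sketch of generating such a batch; the page shape and field names are assumptions and must match whatever index fields you define:

```typescript
// Rough sketch: turn crawled pages into a CloudSearch document batch.
// The Page shape and field names (title, body, url) are assumptions; they
// must match index fields defined on your CloudSearch domain beforehand.
import { writeFileSync } from "node:fs";

interface Page {
  url: string;
  title: string;
  body: string;
}

// A batch is a JSON array of "add" (or "delete") operations.
function toDocumentBatch(pages: Page[]): string {
  const ops = pages.map(page => ({
    type: "add",
    // IDs must be unique and stable; deriving them from the URL works.
    id: page.url.toLowerCase().replace(/[^a-z0-9_-]/g, "_"),
    fields: {
      title: page.title,
      body: page.body,
      url: page.url,
    },
  }));
  return JSON.stringify(ops, null, 2);
}

// Nightly run: write the batch, then upload it with the AWS CLI, e.g.
//   aws cloudsearchdomain upload-documents \
//     --endpoint-url <your doc endpoint> \
//     --content-type application/json --documents batch.json
const crawled: Page[] = [
  { url: "https://example.com/about", title: "About us", body: "…extracted text…" },
];
writeFileSync("batch.json", toDocumentBatch(crawled));
```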
I want to develop an application that will visualize the suggestions from Google Instant. It is for a course project, and for now I don't know much about web programming tools. What I wonder is whether it is possible to retrieve that data from another web page. If you think it is possible, and you know with which platform, could you please point me in the right direction?
Without more information on what you're actually trying to do, it's difficult to give a proper answer. From what I can understand, you just want a list of the auto-completed items from a Google search, to manipulate however you like?
In which case, using the highest-rated answer from here, you can use http://suggestqueries.google.com/complete/search?client=firefox&q=YOURQUERY to get a JSON response, which you can then parse to get the auto-complete results. The client= part is needed, but I haven't looked at the various options you can put in there.
Personally, I've never used JSON before, so I can't give you much help on how to go about parsing it, but you can find more information about it on the JSON website and the W3 website.
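That said, a rough sketch of fetching and parsing those suggestions could look like this (assuming the client=firefox response is a two-element array of the query and the suggestion list):

```typescript
// Minimal sketch (Node 18+): fetch Google suggest results and print them.
// Assumes the client=firefox response looks like ["<query>", ["suggestion", ...]].

async function suggestions(query: string): Promise<string[]> {
  const url =
    "http://suggestqueries.google.com/complete/search?client=firefox&q=" +
    encodeURIComponent(query);
  const res = await fetch(url);
  const data = (await res.json()) as [string, string[]];
  return data[1]; // the list of auto-complete suggestions
}

suggestions("how to").then(list => console.log(list));
```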
You will need to act like JavaScript, run a JavaScript engine, or build a browser add-on and communicate with that add-on.
What happens as you type is that a JavaScript function is called, so you need to call this function yourself or mimic what it does. I guess it calls a web service programmatically (AJAX) with what you have typed, and the server responds with the suggestions. It's not very difficult, as long as Google does not deny you once it realizes you're not a browser. I think they allow only about 100 free API calls, but you can google Google about that.
HttpComponents in Java will help with calling the service, cookies, etc. You should use the dev tools in Firefox to see what happens under the hood when you type in the Google search bar, and look at the code.