Vulnerabilities in io.netty package - security

I am fixing some vulnerabilities related to io.netty and I see that some of the reports are wrong.
For instance, take this report for CVE-2019-20445:
{
  "id": "CVE-2019-20445",
  "status": "fixed in 4.1.44",
  "cvss": 9.1,
  "vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N",
  "description": "HttpObjectDecoder.java in Netty before 4.1.44 allows a Content-Length header to be accompanied by a second Content-Length header, or by a Transfer-Encoding header.",
  "severity": "critical",
  "packageName": "io.netty_netty",
  "packageVersion": "3.10.6.Final",
  "link": "https://nvd.nist.gov/vuln/detail/CVE-2019-20445",
  "riskFactors": [
    "Critical severity",
    "Has fix",
    "Attack complexity: low",
    "Attack vector: network"
  ],
  "impactedVersions": [
    "\u003c4.1.44"
  ],
  "publishedDate": "2020-01-29T21:15:00Z",
  "discoveredDate": "2022-05-12T20:17:26Z",
  "graceDays": -804,
  "fixDate": "2020-01-29T21:15:00Z",
  "layerTime": "2022-05-12T20:15:38Z"
}
This report says that the io.netty 3.10 package has the vulnerability, but when you analyze the link included in it (https://nvd.nist.gov/vuln/detail/CVE-2019-20445) you can see that it is about the netty 4.* package, not the netty 3.* package.
There are two different packages: io.netty:netty:3.* (with classes in the org.jboss.netty namespace) and io.netty:netty:4.* (with classes in the io.netty namespace).
The report is mixing both packages. I think the underlying problem is that io.netty has published two different packages under the same name (artifactId); they should have been named, for instance, netty3 and netty4.
Now I don't know how to fix this, since io.netty:netty:3.10.6.Final has no critical vulnerability; that vulnerability is in io.netty:netty:4.1.* (lower than 4.1.44).
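To illustrate the false positive, here is a hypothetical sketch (not the scanner's actual code) of naive version matching: stripping the .Final qualifier and comparing the numeric parts against the <4.1.44 constraint flags 3.10.6.Final, even though the 3.x line is a different codebase.

# Hypothetical reconstruction of a scanner's naive version matching (Python).
def parse(version: str) -> tuple:
    # Keep only the numeric parts, dropping qualifiers such as ".Final".
    return tuple(int(p) for p in version.split(".") if p.isdigit())

installed = "3.10.6.Final"   # io.netty:netty 3.x (org.jboss.netty classes)
fixed_in = "4.1.44"          # from "impactedVersions": ["<4.1.44"]

if parse(installed) < parse(fixed_in):
    # Fires for 3.10.6.Final, although CVE-2019-20445 only affects the 4.x codebase.
    print("flagged as vulnerable")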
Does anyone else have this problem?
I think this is a problem in NVD.

Related

Azure Spell not detecting spelling mistakes

I've written up a quick proof-of-concept console app to test out the functionality of the AzureSpell Cognitive Services product; however, it often doesn't seem to detect obvious spelling mistakes.
Having experimented with recommendations from other SO answers, I've had limited success.
Even using the demo located at https://azure.microsoft.com/en-us/services/cognitive-services/spell-check/ produces no results.
For example, consider the following piece of text: "Currently growing my compny which is a UK based Online compny with clients across the world. Working since 2001 to help indivduals."
This produces no results. I've looked at regional settings, PROOF vs SPELL mode, and character counts, all to no avail.
Has anyone had any success with this service, or, even better, does the above text snippet produce results for you?
Spell mode is working for me with your sample, see below:
The JSON result is:
{
  "_type": "SpellCheck",
  "flaggedTokens": [
    {
      "offset": 21,
      "token": "compny",
      "type": "UnknownToken",
      "suggestions": [
        {
          "suggestion": "company",
          "score": 0.9264452620075305
        }
      ]
    },
    {
      "offset": 55,
      "token": "compny",
      "type": "UnknownToken",
      "suggestions": [
        {
          "suggestion": "company",
          "score": 0.8740149238635179
        }
      ]
    },
    {
      "offset": 120,
      "token": "indivduals",
      "type": "UnknownToken",
      "suggestions": [
        {
          "suggestion": "individuals",
          "score": 0.753968656686115
        }
      ]
    }
  ]
}
OK, so after a fair amount of trial and error I've had some success, which has solved some issues and created others.
I've not been able to get a reliable result from Spell mode, but I have with Proof mode; however, after adding a fairly short piece of text, it would again not report any results. Inspecting the API call shows the text is URL-encoded in the POST body; removing both "%0D" and "%0A" (the carriage-return and line-feed characters) allows me to Proof long texts successfully. That would be fine, except that, being UK based, lots of correct spellings are now flagged as incorrect, as Proof mode is only available for the US market.
So I've still been unable to get a functioning Spell result (it works only for very short pieces of text). I understand the documentation states up to 130 characters for GET but 10,000 characters for POST, and my typical POSTs are around 1,000 characters. Possibly a ticket with MS unless anyone has any ideas?
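In case it helps anyone reproduce the workaround, here is a minimal Python sketch of it: strip the CR/LF characters, then POST in Spell mode. The endpoint, parameters, and header follow the Bing Spell Check v7 documentation; the subscription key is a placeholder.

import requests

KEY = "YOUR_SUBSCRIPTION_KEY"  # placeholder
ENDPOINT = "https://api.cognitive.microsoft.com/bing/v7.0/spellcheck"

text = "Currently growing my compny which is a UK based Online compny with clients across the world. Working since 2001 to help indivduals."
# The workaround described above: remove CR/LF so longer POST bodies return results.
text = text.replace("\r", " ").replace("\n", " ")

resp = requests.post(
    ENDPOINT,
    params={"mode": "spell", "mkt": "en-GB"},
    headers={"Ocp-Apim-Subscription-Key": KEY},
    data={"text": text},  # sent as application/x-www-form-urlencoded
)
for token in resp.json().get("flaggedTokens", []):
    print(token["token"], "->", [s["suggestion"] for s in token["suggestions"]])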

LUIS - understand any person name

We are building a product on LUIS / the Microsoft Bot Framework, and one of the doubts we have is person-name understanding. The product is meant to be used by anyone who signs up on our website, which means any company that signs up can have any number of employees, with any names, obviously.
What we understood is that the user entity is not able to recognize all names. We have created a phrase list, but as far as we know there is a limit to phrase lists (10K, or even if it's 100K), and there is no limit to the names in the world. The other way we are considering is to train the entity with utterances; however, if we have hundreds of customers with thousands of users each, utterances will not be a good idea either.
I do not see any other way of handling this situation. Probably I am missing something here? Has anyone faced a similar problem, and how was it handled?
The worst case would be to create a separate LUIS instance for each customer, but that's really a big task to take on only because we can't handle names.
As you might already know, a person's name could literally be anything: an animal, a car, a month, or a color, for example. So there isn't any definitive way to identify something as a name; the closest you can come is via part-of-speech text analysis, and either taking a guess or comparing against an existing list. LUIS, or any other NLP tool, is unlikely to help with this. Here's one approach that might work out better: try something like Microsoft's Text Analytics cognitive service, with a POST to the Key Phrases endpoint, like this:
https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/keyPhrases
and the body:
{
  "documents": [
    {
      "language": "en-us",
      "id": "myid",
      "text": "Please book a flight for John Smith at 2:30pm on Wednesday."
    }
  ]
}
That returns:
{
  "languageDetection": {
    "documents": [
      {
        "id": "e4263091-2d54-4ab7-b660-d2b393c4a889",
        "detectedLanguages": [
          {
            "name": "English",
            "iso6391Name": "en",
            "score": 1.0
          }
        ]
      }
    ],
    "errors": []
  },
  "keyPhrases": {
    "documents": [
      {
        "id": "e4263091-2d54-4ab7-b660-d2b393c4a889",
        "keyPhrases": [
          "John Smith",
          "flight"
        ]
      }
    ],
    "errors": []
  },
  "sentiment": {
    "documents": [
      {
        "id": "e4263091-2d54-4ab7-b660-d2b393c4a889",
        "score": 0.5
      }
    ],
    "errors": []
  }
}
Notice that you get "John Smith" and "flight" back as key phrases. "flight" is definitely not a name, but "John Smith" might be, giving you a better idea of what the name is. Additionally, if you have a database of customer names, you can compare the value to a customer name, either exactly or via Soundex, to increase your confidence in the name.
Sometimes the services don't give you a 100% answer and you have to be creative with workarounds. Please see the Text Analytics API docs for more info.
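As a sketch of that comparison step, here is a simplified Soundex match in Python (this simplified variant treats H and W like vowels; the customer list and candidate below are made up):

def soundex(word: str) -> str:
    # Simplified Soundex: first letter plus up to three digits, zero-padded.
    codes = {c: d for letters, d in
             [("BFPV", "1"), ("CGJKQSXZ", "2"), ("DT", "3"),
              ("L", "4"), ("MN", "5"), ("R", "6")]
             for c in letters}
    word = word.upper()
    result, prev = word[0], codes.get(word[0], "")
    for ch in word[1:]:
        digit = codes.get(ch, "")
        if digit and digit != prev:
            result += digit
        prev = digit
    return (result + "000")[:4]

def looks_like_customer(candidate: str, customers: list) -> bool:
    # Try an exact match first, then compare word-by-word Soundex codes.
    if candidate in customers:
        return True
    cand = [soundex(w) for w in candidate.split()]
    return any(cand == [soundex(w) for w in c.split()] for c in customers)

customers = ["John Smith", "Jane Doe"]               # hypothetical customer DB
print(looks_like_customer("Jon Smyth", customers))   # True: J500/S530 both match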
I have asked this question to a few MS guys in my local region; however, it seems there is no way LUIS can identify names at the moment.
It's not great that, being an NLP service, it is not able to handle such things :(
I found wit.ai the best so far at identifying names, and IBM Watson is also good up to a point. Let's see how they turn out in future, but for now I have switched to https://wit.ai

RESTful API design - naming an "activity" resource

When designing the endpoints for an activity resource that provides information on the activity of other resources, such as users and organisations, we are struggling with naming conventions.
Which would be more semantic:
/organisations/activity
/organisations/activity/${activityId}
/users/activity
/users/activity/${activityId}
OR
/activity/users/${activityId}
/activity/users
/activity/organisations/${activityId}
/activity/organisations
There's no generic answer for this, especially since the mechanisms doing the lookup/retrieval at the other end and the associated back-ends vary so drastically, not to mention the purpose of the use case and the intended application.
That said, assuming for all intents and purposes that the "schema" (or endpoint convention, from the point of view of the end user) is just going to be flat, I have seen many more of the latter, activity-first convention, as the activity is the actual resource, which is what many applications and APIs are developed around.
I've come to expect the following style of representation from APIs today (how they achieve the referencing and mapping is a different story; this is purely from the point of view of the API reference):
{
  "Activity": [
    {
      "date": "1970-01-01 08:00:00",
      "some_other_resource_reference_uuid": "f1c4a41e-1639-4e35-ba98-e7b169d1c92d",
      "user": "b3ababc4-461b-404a-a1a2-83b4ca8c097f",
      "uuid": "0ccf1b41-aecf-45f9-a963-178128096c97"
    }
  ],
  "Users": [
    {
      "email": "johnanderson@mycompany.net",
      "first": "John",
      "last": "Anderson",
      "user_preference_1": "somevalue",
      "user_property_1": "somevalue",
      "uuid": "b3ababc4-461b-404a-a1a2-83b4ca8c097f"
    }
  ]
}
The StackExchange API also allows retrieving objects through multiple methods.
For example, the User type looks like this:
{
  "view_count": 1000,
  "user_type": "registered",
  "user_id": 9999,
  "link": "http://example.stackexchange.com/users/1/example-user",
  "profile_image": "https://www.gravatar.com/avatar/a007be5a61f6aa8f3e85ae2fc18dd66e?d=identicon&r=PG",
  "display_name": "Example User"
}
And on the Question type, the same user is shown underneath the owner object:
{
  "owner": {
    "user_id": 9999,
    "user_type": "registered",
    "profile_image": "https://www.gravatar.com/avatar/a007be5a61f6aa8f3e85ae2fc18dd66e?d=identicon&r=PG",
    "display_name": "Example User",
    "link": "https://example.stackexchange.com/users/1/example-user"
  },
  "is_answered": false,
  "view_count": 31415,
  "favorite_count": 1,
  "down_vote_count": 2,
  "up_vote_count": 3,
  "answer_count": 0,
  "score": 1,
  "last_activity_date": 1494871135,
  "creation_date": 1494827935,
  "last_edit_date": 1494896335,
  "question_id": 1234,
  "link": "https://example.stackexchange.com/questions/1234/an-example-post-title",
  "title": "An example post title",
  "body": "An example post body"
}
On the Posts type reference (using this as a separate example because there are only a handful of methods that return this type), you'll see an example down at the bottom:
Methods That Return This Type
  posts
  posts/{ids}
  users/{ids}/posts 2.2
  me/posts 2.2
So whilst you can access resources (or "types", as they are on StackExchange) in a number of ways, including filters and complex queries, there still exists the ability to reach the desired resource through a number of more direct, transparent URI conventions.
Different applications will clearly have different requirements. For example, the Gmail API is user-based all the way; this makes sense from a user's point of view, given that in the context of the authenticated credential you're separating one user's objects from another's.
This doesn't mean Google uses the same convention for all of their APIs; their Activities API resource is all about the activity.
Even looking at the Twitter API, there is a Direct Messages endpoint resource that has sender and receiver objects within it.
I've not seen many APIs at all that are limited to accessing resources purely via a user endpoint, unless the situation obviously calls for it, as in the Gmail example above.
Regardless of how flexible a REST API can be, the minimum I have come to expect is that some kind of activity, location, physical object, or other entity is usually its own resource, and the user association is plugged in and referenced at various degrees of flexibility (at a minimum, as in the example given at the top of this post).
It should be pointed out that in a true REST API the URI holds no meaning; it's the link relationships from your organisations and users resources that matter.
Clients should just discover those URLs, and should also adapt to the new situation if you decide that you want a different URL structure after all.
That being said, it's nice to have a logical structure for this kind of thing, and either is fine. You're asking for an opinion; there isn't really a standard or best practice here. That said, I would choose option #1.
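To make the point about URIs concrete, here is a minimal, hypothetical Flask sketch (the routes, handler, and fields are illustrative, not a standard) in which both URI styles resolve to the same handler, and the response body carries the link relations clients should actually follow:

from flask import Flask, jsonify

app = Flask(__name__)

# Stub data standing in for a real store.
ACTIVITIES = {"42": {"uuid": "42", "type": "login", "organisation": "acme"}}

# Both conventions can route to the same resource; the URI is just an entry
# point, and the link relations in the body are the real contract.
@app.route("/organisations/activity/<activity_id>")
@app.route("/activity/organisations/<activity_id>")
def organisation_activity(activity_id):
    activity = ACTIVITIES[activity_id]
    return jsonify({
        "activity": activity,
        "_links": {
            "self": f"/activity/organisations/{activity_id}",
            "organisation": f"/organisations/{activity['organisation']}",
        },
    })

if __name__ == "__main__":
    app.run()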

Azure DSC error on initial sync generating mof

I have a custom DSC module which is class-based. During the initial sync process, the target machine tries to generate a MOF in C:\Windows\System32\dsc, which results in an error; this causes the initial sync to report as failed, even though all the individual configuration resource tasks show as succeeded. The tasks that are based on the resource whose MOF was not generated report as succeeded, but in fact have not executed at all.
This is the error:
{
  "JobId": "4deeaf52-aa56-11e6-a940-000d3ad04eaa",
  "OperationType": "Initial",
  "ReportFormatVersion": "2.0",
  "ConfigurationVersion": "2.0.0",
  "StartTime": "2016-11-14T21:37:14.2770000+11:00",
  "Errors": [{
    "ErrorSource": "DSCPowershellResource",
    "Locale": "en-US",
    "Errors": {
      "Exception": {
        "Message": "Could not find the generate schema file dsc\tBlobSync.1.4.tAzureStorageFileSync.schema.mof.",
        "Data": {},
        "InnerException": null,
        "TargetSite": null,
        "StackTrace": null,
        "HelpLink": null,
        "Source": null,
        "HResult": -2146233079
      },
      "TargetObject": null,
      "CategoryInfo": {
        "Category": 6,
        "Activity": "",
        "Reason": "InvalidOperationException",
        "TargetName": "",
        "TargetType": ""
      },
      "FullyQualifiedErrorId": "ProviderSchemaNotFound",
      "ErrorDetails": null,
      "InvocationInfo": null,
      "ScriptStackTrace": null,
      "PipelineIterationInfo": []
    },
    "ErrorCode": "6",
    "ErrorMessage": "Could not find the generate schema file dsc\tBlobSync.1.4.tAzureStorageFileSync.schema.mof.",
    "ResourceId": "[tAzureStorageFileSync]CDrive"
  }],
  "StatusData": [],
  "AdditionalData": [{
    "Key": "OSVersion",
    "Value": {
      "VersionString": "MicrosoftWindowsNT10.0.14393.0",
      "ServicePack": "",
      "Platform": "Win32NT"
    }
  },
  {
    "Key": "PSVersion",
    "Value": {
      "CLRVersion": "4.0.30319.42000",
      "PSVersion": "5.1.14393.206",
      "BuildVersion": "10.0.14393.206"
    }
  }]
}
I have tried manually generating the MOF and including it in the module, but that didn't help (or perhaps I did it wrong). Even though this is a class-based resource, I added the MOF with the name of the class in a \DSCResources\<className>\<className>.schema.mof file. I note that the one generated in the C:\Windows\System32\dsc folder includes the version number, which mine does not; perhaps that's the problem.
After the failed initial sync, the subsequent consistency check does pass, and the MOF is created at the location mentioned in the error message.
The class itself contains a function that calls Import-Module Azure.Storage, which is installed on the machine by a different DSC resource. That module has been installed by the time the consistency check runs, but (obviously) not at the point the initial sync starts. The resource that installs the module is marked as a dependency of the class resource in the configuration, but I think MOF generation must happen at the point the modules are deployed, which is logically before the initial sync has run.
At least that's what I think is happening.
Would be grateful if anyone could instruct me on what can be done in this instance, and whether my assumptions (above) are correct? I can't seem to get any additional errors or telemetry from the MOF compilation process itself to see why the MOF compilation is failing.
#Mods I'm really not clear on what basis this would be downvoted - I don't think asking a question nobody can answer is really grounds for "punishment".
Posting an answer, as nobody really had anything to contribute here and I appear to have solved it on my own. I believe the issue is a matter of timing: the dependent DSC modules are delivered from the pull server and compiled before any of them are executed. The dependency of my class module on Azure.Storage meant that the .psm1 file couldn't be compiled, since that module didn't exist on the machine yet; it would be delivered via a DSC resource at a later time.
Perhaps there is some mechanism that accounts for these dependencies in PS-based modules, or some leniency is applied there that isn't for class-based resources; that's still not clear.
After some experimentation, I have begun creating and shipping the MOF files alongside the psm1 and psd1 files, rather than in the \DSCResources\<className>\ child folder as outlined in my question, and this appears to have resolved the issue.
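For illustration, the layout that worked ends up roughly as below (module and resource names taken from the question; the exact file names depend on what your build produces):

tBlobSync/
    tBlobSync.psd1
    tBlobSync.psm1
    tAzureStorageFileSync.schema.mof    <- shipped alongside the psm1/psd1,
                                           not under \DSCResources\<className>\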
Hopefully this helps someone and doesn't attract more downvotes.

gitlab api How to get Last Commit? logs_tree?

Using the GitLab API, how do you get the last commit?
GET /projects/:id/repository/tree
{
  "name": "assets",
  "type": "tree",
  "mode": "040000",
  "id": "6229c43a7e16fcc7e95f923f8ddadb8281d9c6c6"
}
How do I get logs_tree? The last commit?
Since at least version 12.10, GitLab supports pagination, which is why you can make the call return only one commit:
GET /api/v4/projects/:id/repository/commits?per_page=1&page=1
Otherwise, in the current version of the API there is only one way to solve this problem:
GET /api/v4/projects/:id/repository/commits
The first commit in the array will be the desired one; you can extract it with jq '.[0]'.
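The same call from Python with plain requests (the host, project ID, and token below are placeholders):

import requests

GITLAB = "https://gitlab.example.com"
PROJECT_ID = 123                              # placeholder project ID
HEADERS = {"PRIVATE-TOKEN": "YOUR_TOKEN"}     # placeholder token

resp = requests.get(
    f"{GITLAB}/api/v4/projects/{PROJECT_ID}/repository/commits",
    params={"per_page": 1, "page": 1},
    headers=HEADERS,
)
latest = resp.json()[0]   # with per_page=1 the array holds only the newest commit
print(latest["id"], latest["title"])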
You can use this API:
GET /projects/:id/repository/branches/:branch
The result of this API includes the latest commit of the branch.
https://docs.gitlab.com/ee/api/branches.html#get-single-repository-branch
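The equivalent in Python, assuming a branch named master (host, project ID, and token are placeholders again); the latest commit sits under the commit key of the response:

import requests

resp = requests.get(
    "https://gitlab.example.com/api/v4/projects/123/repository/branches/master",
    headers={"PRIVATE-TOKEN": "YOUR_TOKEN"},  # placeholder token
)
print(resp.json()["commit"]["id"])   # latest commit on the branch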
I would recommend following the spec listed here, which says that you can use
GET /projects/:id/repository/commits/tree
to return the following example data:
{
  "id": "6104942438c14ec7bd21c6cd5bd995272b3faff6",
  "short_id": "6104942438c",
  "title": "Sanitize for network graph",
  "author_name": "randx",
  "author_email": "dmitriy.zaporozhets@gmail.com",
  "created_at": "2012-09-20T09:06:12+03:00",
  "message": "Sanitize for network graph",
  "committed_date": "2012-09-20T09:06:12+03:00",
  "authored_date": "2012-09-20T09:06:12+03:00",
  "parent_ids": [
    "ae1d9fb46aa2b07ee9836d49862ec4e2c46fbbba"
  ],
  "status": "running"
}
This is the latest commit. In terms of finding logs_tree, the full documentation may help you.
