I want to store all the logs that I write in my application: what the input request was, what the output object was, some intermediate states of the objects and the logic, and the output response, all serialized as JSON. Something like:
log.LogInformation("Final list of objects is " + JsonConvert.SerializeObject(finalSelectedItems));
Essentially, I want to be able to look later at what the input request was, what the different states of the various objects were, and what the output result was, since I need this for debugging.
The system is dynamic and I can't reproduce the scenarios in which it doesn't behave as anticipated, so I want to capture all of this information whenever the system is misbehaving.
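What I have in mind is something like this (a minimal sketch, assuming ASP.NET Core's ILogger and Newtonsoft.Json; the types and the correlation ID are just illustrative):

using System;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

public record OrderRequest(string Item, int Quantity);  // illustrative type
public record OrderResult(bool Success);                // illustrative type

public class OrderService
{
    private readonly ILogger<OrderService> _log;

    public OrderService(ILogger<OrderService> log) => _log = log;

    public OrderResult Process(OrderRequest request)
    {
        // One correlation ID per request, so every entry for this
        // request can be pulled back together later.
        var correlationId = Guid.NewGuid();

        _log.LogInformation("{CorrelationId} Input request: {Request}",
            correlationId, JsonConvert.SerializeObject(request));

        // ... intermediate object states would be logged the same way ...

        var result = new OrderResult(true);

        _log.LogInformation("{CorrelationId} Output response: {Response}",
            correlationId, JsonConvert.SerializeObject(result));

        return result;
    }
}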
I am not able to figure out the difference between a data source block and an output block in terms of functionality, because both are used for getting information about a resource from the console, like its id, public_ip, etc. Can anyone please help me understand this? I couldn't find a suitable resource explaining it.
I have tried searching online for the difference but couldn't find an actual answer.
data essentially represents a dependency on an object that isn't managed by the current Terraform configuration but that the current configuration still needs to make use of. Mechanically, that typically means making a Get or Read request to a specific API endpoint and then exporting the data from that API response in the resulting attributes.
output is one of the two ways that data can flow from one module into another: variable blocks represent data moving from the parent module into the child module, and output blocks represent data moving from the child module out to the parent.
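For example (a minimal sketch; the instance ID is a placeholder), a data block reads an object that this configuration does not manage, and an output block re-exports one of its attributes to whatever is consuming the module:

# Read an existing EC2 instance that this configuration does not manage
data "aws_instance" "existing" {
  instance_id = "i-0123456789abcdef0"
}

# Expose one of its attributes to the parent module (or to `terraform output`)
output "existing_public_ip" {
  value = data.aws_instance.existing.public_ip
}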
There is no strong relationship between these two concepts, but one way they sometimes connect is if you use the tfe_outputs data source belonging to the hashicorp/tfe provider, or the terraform_remote_state data source from the terraform.io/builtin/terraform provider. Both of those data sources treat the output values from the root module of some other Terraform configuration as the external object to fetch, so you can use them as one way to consume the results of one configuration as part of another, as long as the second configuration will be run in a context that has access to the state of the first.
NOTE: I'm using the v2 SDK.
In my provider my 'resource' isn't a single API call.
My resource is actually multiple 'things'.
For example...
resource "my_resource" "example" {
foo {
...
}
bar {
...
}
baz {
...
}
}
The resource and each of the nested blocks are all separate 'things' that each have their own API calls.
So when 'creating' this resource I actually need to make multiple API calls: one call to create the resource itself, then a call to create a 'foo', then more calls for 'bar', 'baz', etc. Finally, once those nested things are created, I need to call my API one last time to activate the main resource.
The problem I've found is that if there's an error in the creation of one of the nested blocks, the state gets messed up and reflects the 'planned' diff, even though I return an error from the API call as part of the Create step.
I'm interested to know how other people are handling errors in a provider that has a structure like this?
I've tried using Partial(). I've also tried triggering another Read of each 'thing', and although the final state data then looks correct (when printing it as part of a debug run with trace logs), because my Create function has to return an error, the state data from that Read is dropped and the original planned diff is persisted. (I've even stopped returning an error altogether and tried returning just the result of the Read, which succeeds, and STILL the state reflects the planned diff rather than the modified state after the Read.)
Since you mentioned Partial I'm assuming for this answer that you are using the older SDKv2 rather than the modern Terraform provider framework.
The programming model for SDKv2 is for the action functions like Create to receive a mutable value representing the planned values, encapsulated in a schema.ResourceData object, and then the provider will modify that value through that wrapping object to make it describe the object that was actually created (or updated).
Terraform Core itself expects a provider to respond to the "apply" request by returning the closest possible representation of what was actually created in the remote system. If the value is returned without an error then Terraform will require that the object conforms to the plan and will raise an error saying that there's a bug in the provider if not. If the provider returns a value and an error then Terraform Core will propagate that error to the user and save whatever value was returned, as long as it matches the schema of the resource type.
Unfortunately this mismatch in models between Terraform Core and the SDK makes the situation you've described quite tricky: if you don't call d.Set at all in your Create function then by default the SDK will just return whatever values were in the plan, even if parts of it weren't actually created yet. To make your provider behave in the way that Terraform is expecting you'd need to do something like this:
1) At the beginning of Create, decode all of the nested block data into some local variables of data types that are useful for making the API calls you intend to make. For example, you might at this step decode the data from the ResourceData object into whatever struct types the underlying SDK expects.
2) Before you take any other actions, use d.Set to remove all of the blocks of the types that will require separate requests each. This means you'll need to pass an empty value of whatever type is appropriate for the type you chose for that block's value.
3) In your loop where you're gradually creating the separate objects that each block represents, gradually append the results into a growing set of objects representing the blocks you've already successfully created. Each time you add a new item to that set, call d.Set again to reset the attribute representing the appropriate block type to now include the object that you created.
4) If you get to the end without any errors then your attributes should now again describe all of the objects requested in the configuration and you can return without an error. If you encounter an error partway through then you can return that error and the SDK will automatically also return the partially-updated value encapsulated inside the ResourceData object.
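Here's a minimal Go sketch of that pattern, assuming a foo block implemented as a TypeSet and the usual SDKv2 CreateContext signature; the apiClient type and its CreateResource/CreateFoo/ActivateResource methods are hypothetical stand-ins for your real API:

import (
    "context"

    "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func resourceMyResourceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
    client := meta.(*apiClient) // hypothetical client type

    // Step 1: decode the planned nested blocks into local values.
    plannedFoos := d.Get("foo").(*schema.Set).List()

    // Step 2: clear the attribute so the saved state won't claim
    // blocks that haven't actually been created yet.
    if err := d.Set("foo", []interface{}{}); err != nil {
        return diag.FromErr(err)
    }

    // Create the main resource itself (hypothetical call).
    id, err := client.CreateResource(ctx, d.Get("name").(string))
    if err != nil {
        return diag.FromErr(err)
    }
    d.SetId(id)

    // Step 3: create each nested object, re-recording the ones that
    // succeeded after every call, so a failure partway through still
    // leaves an accurate partial state behind.
    created := make([]interface{}, 0, len(plannedFoos))
    for _, foo := range plannedFoos {
        if err := client.CreateFoo(ctx, id, foo); err != nil {
            // Step 4: the SDK automatically returns the partially-
            // updated ResourceData alongside this error, so the saved
            // state matches what actually exists remotely.
            return diag.FromErr(err)
        }
        created = append(created, foo)
        if err := d.Set("foo", created); err != nil {
            return diag.FromErr(err)
        }
    }

    // All nested objects exist, so activate the main resource.
    if err := client.ActivateResource(ctx, id); err != nil {
        return diag.FromErr(err)
    }
    return nil
}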
If you return an accurate description of which of the blocks were created and exclude the ones that weren't then on the next plan the SDK logic should notice that some of the blocks declared in the configuration aren't present in the prior state and so it should propose to update the object to include those additional blocks. Your provider's Update function can then follow a similar principle as above to gradually append only the nested objects it successfully created, so that it'll once again return a complete set if successful or a partial set in case of any errors.
SDKv2 was optimized for the more common case where a single resource block represents a single remote API call, and so its default behavior deals with either fully-successful or fully-failed responses. Dealing with partial failure requires more subtlety that is difficult to represent in that SDK's API.
The newer Terraform Plugin Framework has a different design for these operations which separates the request data from the response data, thereby making it less confusing to return only a partial result. The Resource interface has a Create method which has a request object containing the config and the plan and a response object containing a representation of the final state.
It pre-populates the response state with the planned values similarly to SDKv2 to still handle that common case of entirely-failing vs. entirely-succeeding, but it does also allow totally overwriting that default with a locally-constructed object representing a partial result, to better support situations like yours where one resource in Terraform is representing a number of different fallible calls to the underlying API.
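For illustration, a Create method in the framework has roughly this shape (a sketch using the resource package from github.com/hashicorp/terraform-plugin-framework; myResource and myResourceModel are hypothetical):

func (r *myResource) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) {
    var plan myResourceModel // hypothetical model struct
    resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...)
    if resp.Diagnostics.HasError() {
        return
    }

    // ... make the various fallible API calls here, building up `state`
    // to describe only what was actually created ...
    var state myResourceModel

    // Overwrite the pre-populated (planned) response state with the
    // locally-constructed, possibly-partial result.
    resp.Diagnostics.Append(resp.State.Set(ctx, &state)...)
}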
I have a use case where a mobile app records a long series of commands. Each command is a short, single word (or number). They can happen quickly, one right after the other, but the use case does not care if it takes several seconds to get results back from the Cognitive Services server. It is currently implemented as discrete asynchronous requests rather than streaming (that seems to be more reliable for us).
Since results are coming back async, I see no easy way to map the result back to its corresponding request (and ultimately the app command). Can I embed a unique ID somewhere that will get passed back to me? Is there some other option?
Are you using the SDK?
If you do recognizeOnce, you get the result for that audio back as the call's result (synchronously), so the mapping back to the request is implicit.
If you do continuous recognition, there is currently no way to tag the audio segment.
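For example, with the JavaScript Speech SDK (a minimal sketch, assuming the microsoft-cognitiveservices-speech-sdk package; the key, region, and file names are placeholders), each discrete request carries its own result back, so pairing it with your command needs no server-side tagging:

const fs = require("fs");
const sdk = require("microsoft-cognitiveservices-speech-sdk");

const speechConfig = sdk.SpeechConfig.fromSubscription("YOUR_KEY", "YOUR_REGION");

// One discrete recognition per recorded command; the result arrives as
// this call's result, so it is trivially paired with commandId.
function recognizeCommand(commandId, wavFile) {
    return new Promise((resolve, reject) => {
        const audioConfig = sdk.AudioConfig.fromWavFileInput(fs.readFileSync(wavFile));
        const recognizer = new sdk.SpeechRecognizer(speechConfig, audioConfig);
        recognizer.recognizeOnceAsync(
            result => { recognizer.close(); resolve({ commandId, text: result.text }); },
            err => { recognizer.close(); reject(err); }
        );
    });
}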
I am new to Node-RED and I am confused by the "message payload flow system".
I want to send a POST request that contains, among other params, files in the request payload. These files should be in an array called "files".
I read my files from the file system, and this works fine, but in the function node, how do I build my POST payload?
So far I have this:
The problem is that the payload contains both files and I can't find a way to access them separately. How can I retrieve both of my files, separately, in the BUILD-POST-REQ function?
The core Join node can be used to combine the output of parallel input streams. It has a number of modes that control how many input messages it will collect together.
These include a count of messages.
You can also choose how it combines the input messages: as an array, or as an object that uses each msg.topic as the key for the corresponding msg.payload.
OK, I found a solution, but I don't know if it's best practice. Feel free to correct me!
The idea is that after each file read, I store the file in a new property of the msg object, which I can then access later in the flow.
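Concretely, the function nodes look something like this (a sketch; the property names file1/file2 are arbitrary):

// Function node after the first file-read node:
msg.file1 = msg.payload;
return msg;

// Function node after the second file-read node:
msg.file2 = msg.payload;
return msg;

// BUILD-POST-REQ: assemble the body for the HTTP Request node.
msg.payload = {
    files: [msg.file1, msg.file2]
    // ... other params go here ...
};
return msg;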
A seemingly simple question, but I am unsure of the Node.js equivalent of what I'm used to (say, from Python or LAMP), and I actually think there may not be one.
Problem statement: I want to use basic, simple logging in my express app. Maybe I want to output DEBUG messages, or INFO messages, or just some stats to the log for consumption by other back-end systems later.
1) I want all log messages, however, to contain some fields: remote IP and request URL, for example.
2) On the other hand, code that logs is everywhere in my app, including deep inside the call tree.
3) I don't want to pass (req, res) down into every node of the call tree (that just creates a lot of parameter passing where the parameters are mostly not needed, and complicates my code, as I'd have to pass them into async callbacks, timeouts, etc.).
In other systems, where there is a thread per request, I would store the (req, res) pair (where all the data I need lives) in thread-local storage, and the logger would read from it to format the message.
In Node, there is only one thread. What is my alternative here? What is "the request context in which a specific piece of code is running"?
The only way I can think of to achieve something like this is to capture a stack trace and use reflection to look at local variables up the call tree. I hate that idea; besides, I would need to implement it for every callback, setTimeout, setInterval, new Function(), eval, ... and the list goes on.
What are other people doing?
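The closest built-in analogue to thread-local storage in Node is AsyncLocalStorage from node:async_hooks: it carries a store across an entire async call chain, so middleware can stash the per-request data once and any logger deep in the call tree can read it back. A minimal sketch with Express (the context fields are illustrative):

const express = require("express");
const { AsyncLocalStorage } = require("node:async_hooks");

const als = new AsyncLocalStorage();
const app = express();

// Enter a per-request store; everything invoked from `next` onwards,
// sync or async, sees this store via als.getStore().
app.use((req, res, next) => {
    als.run({ ip: req.ip, url: req.originalUrl }, next);
});

// A logger callable from anywhere in the call tree, with no need to
// thread (req, res) through every function.
function log(level, message) {
    const ctx = als.getStore() || {};
    console.log(JSON.stringify({ level, message, ...ctx }));
}

app.get("/", (req, res) => {
    setTimeout(() => {            // the context survives async boundaries
        log("INFO", "handling request");
        res.send("ok");
    }, 10);
});

app.listen(3000);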