My bot makes swaps using UniswapV2Router, from baseToken -> token and then token -> baseToken, for tokens picked from a random list, using the following method.
const tx = await swapper.connect(owner).duelDexTrade(router1,router2,baseToken,token,amount)
await tx.wait()
console.log(tx)
I approve the amounts for the router (SpookySwap or SpiritSwap) before each swap.
function swap(address _router, address _tokenIn, address _tokenOut, uint _amount) private onlyOwner {
    IERC20(_tokenIn).approve(_router, _amount); // Approve router to spend.
    address[] memory path;
    path = new address[](2);
    path[0] = _tokenIn;
    path[1] = _tokenOut;
    uint deadline = block.timestamp + 300;
    IUniswapV2Router02(_router).swapExactTokensForTokens(_amount, 1, path, address(this), deadline);
}
When I run this from the console, the first transaction (irrespective of the pair) always goes through, and the following transactions all fail with the error:
Error: cannot estimate gas; transaction may fail or may require manual gas limit
reason="execution reverted: TransferHelper: TRANSFER_FROM_FAILED"
When I stop the console and start again, the first transaction goes through, followed by the above error; the result is the same every time. I am using Hardhat and ethers.js. Any help is welcome.
Related
I have assigned 'uint amountOutMin' as 1. But when I swap 0.1 BNB to BTC, the amount out will be less than 1. Will this transaction revert (since amountOutMin is < 1)? I believe we cannot set it to a value like 0.0001.
function swapExactTokensForTokens(
    uint amountIn,
    uint amountOutMin,
    address[] calldata path,
    address to,
    uint deadline
) external returns (uint[] memory amounts);
I am curious because I am getting the error message:
Error: cannot estimate gas; transaction may fail or may require manual gas limit
reason="execution reverted: TransferHelper: TRANSFER_FROM_FAILED"
I have approved the amounts for the router beforehand, and the first transaction always goes through, but the subsequent transactions fail with the error message. The function is as follows:
function swap(address _router, address _tokenIn, address _tokenOut, uint _amount) private onlyOwner {
    IERC20(_tokenIn).approve(_router, _amount); // Approve router to spend.
    address[] memory path;
    path = new address[](2);
    path[0] = _tokenIn;
    path[1] = _tokenOut;
    uint deadline = block.timestamp + 300;
    IUniswapV2Router02(_router).swapExactTokensForTokens(_amount, 1, path, address(this), deadline);
}
I am trying to debug the error and want to eliminate this as a potential culprit.
We are using a queue-trigger-based Function App on the Premium plan, where each message contains details such as an Azure subscription name. For each subscription we make many API calls, especially to Azure Storage accounts (around 400 to 500). Since the 'list' API call to a storage account is limited to 100 calls per 5 minutes, we get a 429 error response on the 101st call. To mitigate this we applied exponential retry logic (we tried both our own and the Polly library), which retries after a certain delay. This works for some subscriptions but fails for many, where the retry logic does not retry again after the first attempt (we kept 3 retries with a 60-second delay). While monitoring the Function App through Live Metrics we also observed that the CPU usage of some function instances sometimes drops to zero (although we do some work such as logging, or use a for loop in the delay operation, to keep the function alive), which leads to that particular function instance being killed, the message being pushed back to the queue, and the process starting again with a fresh instance.
Note that since many subscriptions are processed in parallel, the Function App automatically scales out as required. Also, since we are using the Premium plan, one VM is always on. So the killing of an instance (which makes around 400 to 500 storage API calls for a particular subscription) is strange, since in our delay the thread sleep time is only 10 seconds for around 6, 12, or 18 (Time_delay) iterations. The delay function below is used in our retry logic.
private void Delay(int Time_delay, string requestUri, int retryCount)
{
    for (int i = 0; i < Time_delay; i++)
    {
        _logger.LogWarning($"Sleep initiated for id: {requestUri.ToString()}, RetryCount: {retryCount} CurrentTimeDelay: {Time_delay}");
        Thread.Sleep(10000);
        _logger.LogWarning($"Sleep completed for id: {requestUri.ToString()}, RetryCount: {retryCount} CurrentTimeDelay: {Time_delay}");
    }
}
Note: the Function App is not throwing any exception other than the dependency failure with the 429 error response.
Would it be possible for you to requeue instead of using Thread.Sleep? You can use an initial visibility delay when requeuing:
public class Function1
{
    [FunctionName(nameof(TryDoWork))]
    public static async Task TryDoWork(
        [QueueTrigger("some-queue")] SomeItem item,
        [Queue("some-queue")] CloudQueue queue)
    {
        var result = _SomeService.SomeWork(item);

        if (result == 429)
        {
            item.Retries++;
            var json = JsonConvert.SerializeObject(item);
            var message = new CloudQueueMessage(json);
            var delay = TimeSpan.FromSeconds(item.Retries);

            await queue.AddMessageAsync(message, null, delay, null, null);
        }
    }
}
It might be that the sleeping is causing some wonky Function App behavior. I think I remember reading about some issues pertaining to the usage of Thread.Sleep, but I can't find them right now.
Also, you might want to add some sort of handling of messages that end up retrying more than 3 times (or however many you think is reasonable).
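For example, here is a minimal sketch of that kind of handling, building on the snippet above. The poison queue name, the MaxRetries value, and _SomeService are assumptions for illustration only, not part of your original code:

private const int MaxRetries = 3;

// Sketch only: messages that have already been retried MaxRetries times are moved
// to a separate "poison" queue for manual inspection instead of being requeued.
[FunctionName(nameof(TryDoWorkWithPoisonHandling))]
public static async Task TryDoWorkWithPoisonHandling(
    [QueueTrigger("some-queue")] SomeItem item,
    [Queue("some-queue")] CloudQueue queue,
    [Queue("some-queue-poison")] CloudQueue poisonQueue)
{
    var result = _SomeService.SomeWork(item);
    if (result != 429)
        return;

    item.Retries++;
    var json = JsonConvert.SerializeObject(item);
    var message = new CloudQueueMessage(json);

    if (item.Retries > MaxRetries)
    {
        // Park the message instead of retrying forever.
        await poisonQueue.AddMessageAsync(message);
        return;
    }

    // Back off via the initial visibility delay, as in the snippet above.
    var delay = TimeSpan.FromSeconds(Math.Pow(2, item.Retries));
    await queue.AddMessageAsync(message, null, delay, null, null);
}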
How can I use the "wait for external event" functionality of the Durable Task Framework in my code? The following is a sample:
context.ScheduleWithRetry<LicenseActivityResponse>(
    typeof(LicensesCreatorActivity),
    _retryOptions,
    input);
I am using the ScheduleWithRetry<> method of the context to schedule my task on DTF, but when an exception occurs in the code, the above method retries for the number of times specified in _retryOptions.
After the retries are exhausted, the orchestration status is marked as Failed.
I need a way to resume my orchestration on DTF after correcting the cause of the exception.
I have looked into the GitHub code for the method concerned, but with no success.
I have come up with two possible solutions:
1. Call a framework method (if one exists) to re-queue the orchestration from the state where it failed.
2. Wrap the orchestration code in a try/catch and, in the catch section, call a method such as CreateOrchestrationInstanceWithRaisedEventAsync which would put the orchestration on hold until an external event triggers it again, i.e., whenever a user (via some front-end application) raises the external event to resume (meaning the user has made the corrections that were causing the exception).
These are my understandings. If one of the above is possible, kindly guide me with some technical suggestions; otherwise, point me to the correct approach for this task.
For the community's benefit, Salman resolved the issue by doing the following:
"I solved the problem by creating a sub orchestration in case of an exception occurs while performing an activity. The sub orchestration lock the event on azure as pending state and wait for an external event which raise the locked event so that the parent orchestration resumes the process on activity. This process helps if our orchestrations is about to fail on azure durable task framework"
I have figured out the solution for my problem by using "Signal Orchestrations", taken from code in a GitHub repository.
The following is the solution diagram for the problem.
In this diagram, before the solution was implemented, we only had "Process Activity", which actually executes the activity.
The Azure Storage table stores the multiplier values for an InstanceId and ActivityName; why we implemented this will become clear later.
The monitoring website is the platform from which a user can re-queue/retry the orchestration activity.
Now we have a pre-step and a post-step.
1. Get Retry Option (Pre-Step)
This method basically sets the values used to create the RetryOptions instance.
private RetryOptions ModifyMaxRetires(OrchestrationContext context, string activityName)
{
    var failedInstance =
        _azureStorageFailedOrchestrationTasks.GetSingleEntity(context.OrchestrationInstance.InstanceId,
            activityName);
    var configuration = Container.GetInstance<IConfigurationManager>();

    if (failedInstance.Result == null)
    {
        return new RetryOptions(TimeSpan.FromSeconds(configuration.OrderTaskFailureWaitInSeconds),
            configuration.OrderTaskMaxRetries);
    }

    var multiplier = ((FailedOrchestrationEntity)failedInstance.Result).Multiplier;
    return new RetryOptions(TimeSpan.FromSeconds(configuration.OrderTaskFailureWaitInSeconds),
        configuration.OrderTaskMaxRetries * multiplier);
}
If we have an entry in our Azure Storage table for the InstanceId and ActivityName, we take the multiplier value from the table and update the retry count when creating the RetryOptions instance; otherwise we use the default retry count, which comes from our config.
Then:
We process the activity with the scheduled retry count (in case the activity fails at any point).
2. Handle Exceptions (Post-Step)
This method handles the exception when the activity fails to complete even after the number of retries set for the activity in the RetryOptions instance.
private async Task HandleExceptionForSignal(OrchestrationContext context, Exception exception, string activityName)
{
    var failedInstance = _azureStorageFailedOrchestrationTasks.GetSingleEntity(context.OrchestrationInstance.InstanceId, activityName);

    if (failedInstance.Result != null)
    {
        _azureStorageFailedOrchestrationTasks.UpdateSingleEntity(context.OrchestrationInstance.InstanceId, activityName, ((FailedOrchestrationEntity)failedInstance.Result).Multiplier + 1);
    }
    else
    {
        // const multiplier when the exception occurs for the first time.
        const int multiplier = 2;
        _azureStorageFailedOrchestrationTasks.InsertActivity(new FailedOrchestrationEntity(context.OrchestrationInstance.InstanceId, activityName)
        {
            Multiplier = multiplier
        });
    }

    var exceptionInput = new OrderExceptionContext
    {
        Exception = exception.ToString(),
        Message = exception.Message
    };

    await context.CreateSubOrchestrationInstance<string>(typeof(ProcessFailedOrderOrchestration), $"{context.OrchestrationInstance.InstanceId}_{Guid.NewGuid()}", exceptionInput);
}
The above code first tries to find the InstanceId and ActivityName in Azure Storage. If they are not there, we simply add a new row to the Azure Storage table for the InstanceId and ActivityName with the default multiplier value 2.
Later on, we create a new exception-context instance to send the exception message and details to the sub-orchestration (which will be shown to a user on the monitoring website). The sub-orchestration waits for the external event fired by a user against the InstanceId of the sub-orchestration.
Whenever that event is fired from the monitoring website, the sub-orchestration ends and control goes back to the parent orchestration once again. But this time, when the pre-step is called again, it finds the entry in the Azure Storage table with a multiplier, which means the retry options get updated by multiplying the default retry count.
In this way, we can continue our orchestrations and prevent them from failing.
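For illustration, here is a minimal sketch of how the pre-step, the activity call, and the post-step could fit together inside the parent orchestration. OrderContext and the orchestration class name are illustrative assumptions; ModifyMaxRetires and HandleExceptionForSignal are the methods shown above, and the activity types are the ones from the sample at the top of the question:

// Sketch only, not the exact production code.
internal class ProcessOrderOrchestration : TaskOrchestration<string, OrderContext>
{
    public override async Task<string> RunTask(OrchestrationContext context, OrderContext input)
    {
        var activityName = nameof(LicensesCreatorActivity);
        var done = false;

        while (!done)
        {
            // 1. Pre-step: build RetryOptions, multiplied if this activity failed before.
            var retryOptions = ModifyMaxRetires(context, activityName);

            try
            {
                // Process the activity with the scheduled retry count.
                await context.ScheduleWithRetry<LicenseActivityResponse>(
                    typeof(LicensesCreatorActivity), retryOptions, input);
                done = true;
            }
            catch (Exception exception)
            {
                // 2. Post-step: record the multiplier and hand off to the signal
                // sub-orchestration, which waits for the user's external event.
                // Afterwards we loop back so the pre-step picks up the new multiplier.
                await HandleExceptionForSignal(context, exception, activityName);
            }
        }

        return "Completed";
    }
}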
The following is the sub-orchestration class.
internal class ProcessFailedOrderOrchestration : TaskOrchestration<string, OrderExceptionContext>
{
    private TaskCompletionSource<string> _resumeHandle;

    public override async Task<string> RunTask(OrchestrationContext context, OrderExceptionContext input)
    {
        await WaitForSignal();
        return "Completed";
    }

    private async Task<string> WaitForSignal()
    {
        _resumeHandle = new TaskCompletionSource<string>();
        var data = await _resumeHandle.Task;
        _resumeHandle = null;
        return data;
    }

    public override void OnEvent(OrchestrationContext context, string name, string input)
    {
        _resumeHandle?.SetResult(input);
    }
}
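For completeness, a minimal sketch of how the external event might be raised from outside (for example, from the monitoring website's backend), assuming an initialized DurableTask TaskHubClient and the sub-orchestration's InstanceId; the event name and payload are illustrative:

// Sketch only: raising the signal so that OnEvent fires and WaitForSignal completes.
var instance = new OrchestrationInstance { InstanceId = subOrchestrationInstanceId };
await taskHubClient.RaiseEventAsync(instance, "Resume", "corrected");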
I'm currently trying to implement an Ethereum node connection in my TypeScript/Node project.
I'm connecting to the "Infura" node server, where I need to sign my transaction locally.
Well, anyway, I'm signing my transaction using the npm package "ethereumjs-tx" and everything looks great.
When I'm using "sendRawTransaction" from web3, my response is a tx-id, which means my transaction should already be in the blockchain. Well... it isn't.
My sign-transaction function is below.
private signTransactionLocally(amountInWei: number, to: string, privateKey: string = <PRIVATE_KEY>, wallet: string = <MY_WALLET>) {
    const pKeyBuffer = Buffer.from(privateKey, "hex");
    const txParams = {
        nonce: this.getNonce(true, wallet),
        //gas: this.getGasPrice(true),
        gasLimit: this.getGasLimit2(true),
        to: to,
        value: amountInWei,
        data: '0x000000000000000000000000000000000000000000000000000000000000000000000000',
        chainId: "0x1"
    };
    // console.log(JSON.stringify(txParams));
    const tx = new this.ethereumTx(txParams);
    tx.sign(pKeyBuffer);
    return tx.serialize().toString("hex");
}
Functions used in "signTransactionLocally":
private getGasLimit2(hex: boolean = false) {
    const latestGasLimit = this.web3.eth.getBlock("latest").gasLimit;
    return hex ? this.toHex(latestGasLimit) : latestGasLimit;
}

private getNonce(hex: boolean = false, wallet: string = "0x60a22659E0939a061a7C9288265357f5d26Cf98a") {
    return hex ? this.toHex(this.eth().getTransactionCount(wallet)) : this.eth().getTransactionCount(wallet);
}
Running my code looks like the following:
this.dumpInformations();

const signedTransaction = this.signTransactionLocally(
    this.toHex(this.getMaxAmountToSend(false, "0x60a22659E0939a061a7C9288265357f5d26Cf98a") / 3),
    "0x38bc48f1d19fdf7c8094a4e40334250ce1c1dc66"
);
console.log(signedTransaction);

this.web3.eth.sendRawTransaction("0x" + signedTransaction, function (err: any, res: any) {
    if (err)
        console.log(err);
    else
        console.log("transaction Done=>" + res);
});
Since sendRawTransaction results in the console log:
[Node] transaction Done=>0xc1520ebfe0a225e6971e81953221c60ac1bfcd528e2cc17080b3f9b357003e34
everything should be all right.
Has anybody had the same problem?
I hope that someone can help me. Have a nice day!
After dealing with these issues countless times, I'm pretty sure you are sending a nonce that is too high.
In these cases, the node will still return a transaction hash, but your transaction will remain in the node's queue and will not enter the memory pool or be propagated to other nodes until you fill in the nonce gaps.
You could try the following:
Use getTransactionCount(address, 'pending') to include the txs that are in the node's queue and memory pool. But this method is unreliable and won't deal with concurrent requests, as the node needs time to assess the correct count at any given time.
Keep your own counter, without relying on the node (unless you detect some serious errors); see the sketch after this list.
For more serious projects, persist your counter per address at the database level, with locks to handle concurrency, making sure you give out correct nonces for each request.
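A minimal sketch of the "keep your own counter" approach, assuming the synchronous web3 0.x API used in the question; the class and method names are hypothetical, not a definitive implementation:

// Sketch only: seed the nonce once from the node, then hand out strictly
// increasing nonces locally so concurrent sends cannot reuse or skip a nonce.
class NonceManager {
    private nextNonce = -1; // -1 means "not yet initialized from the node"

    constructor(private web3: any, private wallet: string) {}

    getNextNonce(): number {
        if (this.nextNonce < 0) {
            // Seed once from the node; 'pending' includes queued transactions.
            this.nextNonce = Number(this.web3.eth.getTransactionCount(this.wallet, "pending"));
        }
        return this.nextNonce++;
    }

    // Call this if a send fails before reaching the node, so the nonce is not burned.
    rollbackNonce(): void {
        if (this.nextNonce > 0) this.nextNonce--;
    }
}

// Usage (hypothetical): pass this.toHex(nonceManager.getNextNonce()) into txParams
// instead of calling getNonce() against the node for every transaction.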
Cheers
Using the Azure Search .NET SDK, when you try to index documents you might get an IndexBatchException.
From the documentation here:
try
{
    var batch = IndexBatch.Upload(documents);
    indexClient.Documents.Index(batch);
}
catch (IndexBatchException e)
{
    // Sometimes when your Search service is under load, indexing will fail for some of the documents in
    // the batch. Depending on your application, you can take compensating actions like delaying and
    // retrying. For this simple demo, we just log the failed document keys and continue.
    Console.WriteLine(
        "Failed to index some of the documents: {0}",
        String.Join(", ", e.IndexingResults.Where(r => !r.Succeeded).Select(r => r.Key)));
}
How should e.FindFailedActionsToRetry be used to create a new batch to retry the indexing for failed actions?
I've created a function like this:
public void UploadDocuments<T>(SearchIndexClient searchIndexClient, IndexBatch<T> batch, int count) where T : class, IMyAppSearchDocument
{
    try
    {
        searchIndexClient.Documents.Index(batch);
    }
    catch (IndexBatchException e)
    {
        if (count == 5) // we will try to index 5 times and give up if it still doesn't work.
        {
            throw new Exception("IndexBatchException: Indexing Failed for some documents.");
        }

        Thread.Sleep(5000); // we got an error, wait 5 seconds and try again (in case it's an intermittent or network issue)

        var retryBatch = e.FindFailedActionsToRetry<T>(batch, arg => arg.ToString());
        UploadDocuments(searchIndexClient, retryBatch, count++);
    }
}
But I think this part is wrong:
var retryBatch = e.FindFailedActionsToRetry<T>(batch, arg => arg.ToString());
The second parameter to FindFailedActionsToRetry, named keySelector, is a function that should return whatever property on your model type represents your document key. In your example, your model type is not known at compile time inside UploadDocuments, so you'll need to change UploadDocuments to also take the keySelector parameter and pass it through to FindFailedActionsToRetry. The caller of UploadDocuments would need to specify a lambda specific to type T. For example, if T is the sample Hotel class from the sample code in this article, the lambda must be hotel => hotel.HotelId, since HotelId is the property of Hotel that is used as the document key.
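For illustration, a sketch of that change, reusing the method from the question; only the added keySelector parameter, the call that uses it, and the retry counter handling differ:

// Sketch only: UploadDocuments now takes the key selector and passes it through.
public void UploadDocuments<T>(
    SearchIndexClient searchIndexClient,
    IndexBatch<T> batch,
    int count,
    Func<T, string> keySelector) where T : class, IMyAppSearchDocument
{
    try
    {
        searchIndexClient.Documents.Index(batch);
    }
    catch (IndexBatchException e)
    {
        if (count == 5)
        {
            throw new Exception("IndexBatchException: Indexing failed for some documents.");
        }

        Thread.Sleep(5000);

        var retryBatch = e.FindFailedActionsToRetry(batch, keySelector);
        UploadDocuments(searchIndexClient, retryBatch, count + 1, keySelector); // advance the retry counter
    }
}

// The caller supplies the lambda for the concrete type, e.g. for the Hotel sample:
// UploadDocuments(searchIndexClient, batch, 0, hotel => hotel.HotelId);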
Incidentally, the wait inside your catch block should not be for a constant amount of time. If your search service is under heavy load, waiting for a constant delay won't really give it time to recover. Instead, we recommend exponentially backing off (e.g., the first delay is 2 seconds, then 4 seconds, then 8 seconds, then 16 seconds, up to some maximum).
I've taken Bruce's recommendations in his answer and comment and implemented them using Polly:
Exponential backoff up to one minute, after which it retries once per minute.
Retry as long as there is progress; time out after 5 requests without any progress.
IndexBatchException is also thrown for unknown documents. I chose to ignore such non-transient failures, since they are likely indicative of requests which are no longer relevant (e.g., a document removed in a separate request).
int curActionCount = work.Actions.Count();
int noProgressCount = 0;

await Polly.Policy
    .Handle<IndexBatchException>() // One or more of the actions has failed.
    .WaitAndRetryForeverAsync(
        // Exponential backoff (2s, 4s, 8s, 16s, ...) and constant delay after 1 minute.
        retryAttempt => TimeSpan.FromSeconds( Math.Min( Math.Pow( 2, retryAttempt ), 60 ) ),
        (ex, _) =>
        {
            var batchEx = ex as IndexBatchException;
            work = batchEx.FindFailedActionsToRetry( work, d => d.Id );

            // Verify whether any progress was made.
            int remainingActionCount = work.Actions.Count();
            if ( remainingActionCount == curActionCount ) ++noProgressCount;
            curActionCount = remainingActionCount;
        } )
    .ExecuteAsync( async () =>
    {
        // Limit retries if no progress is made after multiple requests.
        if ( noProgressCount > 5 )
        {
            throw new TimeoutException( "Updating Azure search index timed out." );
        }

        // Only retry if the error is transient (determined by FindFailedActionsToRetry).
        // IndexBatchException is also thrown for unknown document IDs;
        // consider them outdated requests and ignore.
        if ( curActionCount > 0 )
        {
            await _search.Documents.IndexAsync( work );
        }
    } );