function swapExactTokensForTokens - amountOutMin - node.js

I have assigned 'uint amountOutMin' as 1. But when I swap 0.1 BNB to BTC, amountOutMin will be less than 1. Will this transaction revert (since amountOutMin is < 1)? I believe we cannot set it to a value like 0.0001.
function swapExactTokensForTokens(
    uint amountIn,
    uint amountOutMin,
    address[] calldata path,
    address to,
    uint deadline
) external returns (uint[] memory amounts);
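For context, amountOutMin is an integer in the output token's smallest unit (per its decimals), so an amount like 0.0001 BTC is representable as a large integer of base units, and a value of 1 only requires that the swap return at least one base unit. A minimal sketch of deriving amountOutMin from an on-chain quote with ethers.js (routerAddress, routerAbi, wbnbAddress, btcbAddress and the 0.5% slippage tolerance are placeholder assumptions):

// Sketch: derive amountOutMin from a quote instead of hardcoding 1.
// routerAddress, routerAbi, wbnbAddress and btcbAddress are placeholders.
const router = new ethers.Contract(routerAddress, routerAbi, provider);
const amountIn = ethers.utils.parseUnits("0.1", 18); // 0.1 BNB in wei
const amounts = await router.getAmountsOut(amountIn, [wbnbAddress, btcbAddress]);
const amountOutMin = amounts[1].mul(995).div(1000); // tolerate up to 0.5% slippage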
I am curious because I am getting the error message:
Error: cannot estimate gas; transaction may fail or may require manual gas limit
reason="execution reverted: TransferHelper: TRANSFER_FROM_FAILED"
I have approved the amounts for the router beforehand; the first transaction always goes through, and the subsequent transactions fail with this error. The function is as follows:
function swap(address _router, address _tokenIn, address _tokenOut, uint _amount) private onlyOwner {
    IERC20(_tokenIn).approve(_router, _amount); // Approve router to spend the input token.
    address[] memory path = new address[](2);
    path[0] = _tokenIn;
    path[1] = _tokenOut;
    uint deadline = block.timestamp + 300; // 5-minute deadline
    IUniswapV2Router02(_router).swapExactTokensForTokens(_amount, 1, path, address(this), deadline);
}
I am trying to debug the error and want to eliminate this as a potential culprit.

Related

How to write a test (using artifact file), needing a DEX router as an input

My necessity is to write a test for the following smart contract method (DeFi) in Hardhat using Chai. It searches for a profitable token swap and returns the tokens. I want to use the artifact file in the test, before deploying the smart contract to the mainnet or a testnet. One main reason is that I cannot use Hardhat's console.log to debug errors in the contract.
My question is: how can I simulate a router when writing the test in Hardhat? Is forking an option?
Any reading material or examples would be welcome. I am relatively new to programming and cannot find any material related to this issue.
Code is as follows:
function search(address _router, address _asset, uint _amount)
    external view returns (uint, address, address, address)
{
    uint amtBack;
    address token1;
    address token2;
    address token3;
    for (uint i1 = 0; i1 < tokens.length; i1++) {
        for (uint i2 = 0; i2 < stables.length; i2++) {
            for (uint i3 = 0; i3 < tokens.length; i3++) {
                // Quote the route: _asset -> tokens[i1] -> stables[i2] -> tokens[i3] -> _asset
                amtBack = amount(_router, _asset, tokens[i1], _amount);
                amtBack = amount(_router, tokens[i1], stables[i2], amtBack);
                amtBack = amount(_router, stables[i2], tokens[i3], amtBack);
                amtBack = amount(_router, tokens[i3], _asset, amtBack);
                if (amtBack > _amount) {
                    token1 = tokens[i1];
                    token2 = stables[i2]; // the middle hop comes from the stables array
                    token3 = tokens[i3];
                    break;
                }
            }
        }
    }
    return (amtBack, token1, token2, token3);
}
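Forking is indeed a common way to test this: run Hardhat against a fork of the live network so the real router and pair contracts exist locally, and view functions like search can be called from the test exactly as on mainnet. A minimal sketch (the RPC URL, compiler version, contract name, and the router/token addresses are assumptions to adapt):

// hardhat.config.js -- fork a live network for tests
module.exports = {
  solidity: "0.8.4",
  networks: {
    hardhat: {
      forking: {
        url: "https://rpc.ftm.tools", // placeholder RPC endpoint
      },
    },
  },
};

// test/search.test.js
const { expect } = require("chai");
const { ethers } = require("hardhat");

describe("search", function () {
  it("quotes a route against the forked router", async function () {
    const Arb = await ethers.getContractFactory("Arb"); // placeholder contract name
    const arb = await Arb.deploy();
    await arb.deployed();
    const router = "0xF491e7B69E4244ad4002BC14e878a34207E38c29"; // SpookySwap router (assumption)
    const wftm = "0x21be370D5312f44cB42ce377BC9b8a0cEF1A4C83";   // WFTM (assumption)
    const [amtBack] = await arb.search(router, wftm, ethers.utils.parseEther("1"));
    expect(amtBack).to.not.be.undefined;
    console.log(amtBack.toString()); // test-side logging works even without console.log in the contract
  });
});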

Transactions fail with "execution reverted: TransferHelper: TRANSFER_FROM_FAILED"

My bot makes swaps using UniswapV2Router, from baseToken -> token and then token -> baseToken, picking pairs from a random list, using the following method:
const tx = await swapper.connect(owner).duelDexTrade(router1,router2,baseToken,token,amount)
await tx.wait()
console.log(tx)
I approve the amounts for the router (SpookySwap or SpiritSwap) before each swap.
function swap(address _router, address _tokenIn, address _tokenOut, uint _amount) private onlyOwner {
    IERC20(_tokenIn).approve(_router, _amount); // Approve router to spend the input token.
    address[] memory path = new address[](2);
    path[0] = _tokenIn;
    path[1] = _tokenOut;
    uint deadline = block.timestamp + 300; // 5-minute deadline
    IUniswapV2Router02(_router).swapExactTokensForTokens(_amount, 1, path, address(this), deadline);
}
When I run the console, the first transaction (irrespective of the pair) always goes through, and the following transactions all fail with the error:
Error: cannot estimate gas; transaction may fail or may require manual gas limit
reason="execution reverted: TransferHelper: TRANSFER_FROM_FAILED"
When I stop the console and start again, the first transaction goes through, followed by the above error. Same result every time. I am using Hardhat and ethers.js. Any help is welcome.
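A debugging sketch: TRANSFER_FROM_FAILED is the revert TransferHelper raises when the token's transferFrom fails inside the router, which typically points at the contract's balance or allowance rather than the swap itself. Logging both right before each trade can confirm it (the variable names are placeholders matching the call above, and an IERC20 artifact is assumed to be available to hardhat-ethers):

// assumes an IERC20 artifact is available to ethers.getContractAt
const erc20 = await ethers.getContractAt("IERC20", baseToken);
console.log("balance:  ", (await erc20.balanceOf(swapper.address)).toString());
console.log("allowance:", (await erc20.allowance(swapper.address, router1)).toString());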

Durable function and CPU resource > 80 %

I am running on an Azure Consumption Plan and notice high CPU usage for what I understand to be a simple task. I would like to know if my approach is correct; at times CPU was above 80%.
I have created a scheduler function, which executes once a minute and checks a SQL database for IoT devices that need to be controlled at specific times. If a device needs to be controlled, a durable function is started. This durable function simply sends the device a message and waits for the reply before sending another request.
What I am doing is simply polling the durable function and then sleeping or delaying it for x seconds, as shown below:
[FunctionName("Irrimer_HttpStart")]
public static async Task<HttpResponseMessage> HttpStart(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")]HttpRequestMessage req,
[OrchestrationClient]DurableOrchestrationClient starter,
ILogger log)
{
// Function input comes from the request content.
log.LogInformation($"Started Timmer Irr");
dynamic eventData = await req.Content.ReadAsAsync<object>();
string ZoneNumber = eventData.ZoneNumber;
string instanceId = await starter.StartNewAsync("Irrtimer", ZoneNumber);
return starter.CreateCheckStatusResponse(req, instanceId);
}
[FunctionName("Irrtimer")]
public static async Task<List<string>> RunOrchestrator(
    [OrchestrationTrigger] DurableOrchestrationContext context, ILogger log)
{
    log.LogInformation($"Time Control Started--->");
    Iot_data_state iotstatedata = new Iot_data_state();
    iotstatedata.NextState = "int_zone";
    var outputs = new List<string>();
    outputs.Add("Stating Durable");
    iotstatedata.zonenumber = context.GetInput<string>();
    iotstatedata.termination_counter = 0;
    while (iotstatedata.NextState != "finished")
    {
        iotstatedata = await context.CallActivityAsync<Iot_data_state>("timer_irr_process", iotstatedata);
        outputs.Add(iotstatedata.NextState + " " + iotstatedata.now_time);
        if (iotstatedata.sleepduration > 0)
        {
            DateTime deadline = context.CurrentUtcDateTime.Add(TimeSpan.FromSeconds(iotstatedata.sleepduration));
            await context.CreateTimer(deadline, CancellationToken.None);
        }
    }
    return outputs;
}
I have another function, "timer_irr_process", which is simply a switch statement that performs actions and queries as required, then sets the delay in seconds before it needs to be invoked again. When it reaches the "Finish" case, the durable function is no longer needed and it exits.
The kind of task I am trying to handle efficiently is: send a message to an IoT device to switch on, observe its performance in case it has been controlled manually, or various other things. If it malfunctions, send a message to a user; if it performs correctly and the task is finished, close the durable function.
Is there any efficient way of doing this?
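One pattern worth considering (a sketch, not a drop-in replacement): Durable Functions replays the orchestrator's entire history on every await, so a long-lived while loop makes each wake-up replay more and more events, which can show up as CPU. Restarting the orchestration with ContinueAsNew after each cycle keeps the history short:

[FunctionName("Irrtimer")]
public static async Task RunOrchestrator(
    [OrchestrationTrigger] DurableOrchestrationContext context)
{
    var state = context.GetInput<Iot_data_state>();
    state = await context.CallActivityAsync<Iot_data_state>("timer_irr_process", state);

    if (state.NextState == "finished")
        return; // done; the orchestration completes

    if (state.sleepduration > 0)
    {
        DateTime deadline = context.CurrentUtcDateTime.AddSeconds(state.sleepduration);
        await context.CreateTimer(deadline, CancellationToken.None);
    }

    // Restart with the updated state instead of growing the replay history.
    context.ContinueAsNew(state);
}

Note this drops the accumulated outputs list, since only the state round-trips through ContinueAsNew; if the log lines matter, they would need to live in the state object or an external store.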

In ETW, how to enable ProcessRundown events for Microsoft-Windows-Kernel-Process?

The provider's manifest indicates that it can send Microsoft-Windows-Kernel-Process::ProcessRundown::Info events, which I'd really like to have: they give a summary of the processes that existed at the time the trace started.
For reference, in the "usual" process provider enabled by EVENT_TRACE_FLAG_PROCESS, rundown is sent automatically via MSNT_SystemTrace::Process::DCStart events. However, the data fields in that provider do not allow finding the process's image path: the ImageFileName field is an ANSI filename without a path, and the CommandLine field is also unreliable, because it can contain a relative path (in the worst case, no path at all). For this reason, I need the Microsoft-Windows-Kernel-Process provider.
After quite a lot of trying, I found a very simple way: after the provider is enabled with EnableTraceEx2(EVENT_CONTROL_CODE_ENABLE_PROVIDER), an additional EnableTraceEx2(EVENT_CONTROL_CODE_CAPTURE_STATE) will send the events.
Eventually, I enable the provider this way:
namespace Microsoft_Windows_Kernel_Process
{
    struct __declspec(uuid("{22FB2CD6-0E7B-422B-A0C7-2FAD1FD0E716}")) GUID_STRUCT;
    static const auto GUID = __uuidof(GUID_STRUCT);

    enum class Keyword : u64
    {
        WINEVENT_KEYWORD_PROCESS        = 0x10,
        WINEVENT_KEYWORD_THREAD         = 0x20,
        WINEVENT_KEYWORD_IMAGE          = 0x40,
        WINEVENT_KEYWORD_CPU_PRIORITY   = 0x80,
        WINEVENT_KEYWORD_OTHER_PRIORITY = 0x100,
        WINEVENT_KEYWORD_PROCESS_FREEZE = 0x200,
        Microsoft_Windows_Kernel_Process_Analytic = 0x8000000000000000,
    };
}
///////////////////////////////////

const u64 matchAnyKeyword =
    (u64)Microsoft_Windows_Kernel_Process::Keyword::WINEVENT_KEYWORD_PROCESS;

const ULONG status = EnableTraceEx2(
    m_SessionHandle,
    &Microsoft_Windows_Kernel_Process::GUID,
    EVENT_CONTROL_CODE_ENABLE_PROVIDER,
    TRACE_LEVEL_VERBOSE,
    matchAnyKeyword, // Filter events to specific keywords
    0,               // No 'MatchAllKeyword' mask
    INFINITE,        // Synchronous operation
    nullptr          // The trace parameters used to enable the provider
);
ENSURE_OR_CRASH(ERROR_SUCCESS == status);
And request the rundown like this:

const ULONG status = EnableTraceEx2(
    m_SessionHandle,
    &Microsoft_Windows_Kernel_Process::GUID,
    EVENT_CONTROL_CODE_CAPTURE_STATE, // Request 'ProcessRundown' events
    TRACE_LEVEL_NONE, // Probably ignored for 'EVENT_CONTROL_CODE_CAPTURE_STATE'
    0,                // Probably ignored for 'EVENT_CONTROL_CODE_CAPTURE_STATE'
    0,                // Probably ignored for 'EVENT_CONTROL_CODE_CAPTURE_STATE'
    INFINITE,         // Synchronous operation
    nullptr           // Probably ignored for 'EVENT_CONTROL_CODE_CAPTURE_STATE'
);
ENSURE_OR_CRASH(ERROR_SUCCESS == status);
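On the consumer side, the rundown events can then be picked out in the EVENT_RECORD callback by provider GUID; a sketch (the event id check is an assumption — verify the id against the provider's manifest):

// Filter for Microsoft-Windows-Kernel-Process rundown events while processing the trace.
// The event id (15) is an assumption; confirm it against the provider's manifest.
static void WINAPI OnEventRecord(PEVENT_RECORD record)
{
    if (IsEqualGUID(record->EventHeader.ProviderId, Microsoft_Windows_Kernel_Process::GUID)
        && record->EventHeader.EventDescriptor.Id == 15 /* ProcessRundown? */)
    {
        // Decode fields such as ProcessID and ImageName with
        // TdhGetEventInformation / TdhGetProperty.
    }
}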

StackExchange.Redis on Azure is throwing timeout performing get and no connection available exceptions

I recently switched an MVC application that serves data feeds and dynamically generated images (6k requests/minute throughput) from the v3.9.67 ServiceStack.Redis client to the latest StackExchange.Redis client (v1.0.450), and I'm seeing somewhat slower performance and some new exceptions.
Our Redis instance is S4 level (13GB), CPU shows a fairly constant 45% or so and network bandwidth appears fairly low. I'm not entirely sure how to interpret the gets/sets graph in our Azure portal, but it shows us around 1M gets and 100k sets (appears that this may be in 5 minute increments).
The client library switch was straightforward and we are still using the v3.9 ServiceStack JSON serializer so that the client lib was the only piece changing.
Our external monitoring with New Relic shows clearly that our average response time increases from about 200ms to about 280ms between ServiceStack and StackExchange libraries (StackExchange being slower) with no other change.
We recorded a number of exceptions with messages along the lines of:
Timeout performing GET feed-channels:ag177kxj_egeo-_nek0cew, inst: 12, mgr: Inactive, queue: 30, qu=0, qs=30, qc=0, wr=0/0, in=0/0
I understand this to mean that there are a number of commands in the queue that have been sent with no response yet from Redis, and that this can be caused by long-running commands that exceed the timeout. These errors appeared for a period when the SQL database behind one of our data services was getting backed up, so perhaps that was the cause? After scaling out that database to reduce load we haven't seen many more of this error, but the DB query happens in .NET and I don't see how that would hold up a Redis command or connection.
We also recorded a large batch of errors this morning over a short period (couple of minutes) with messages like:
No connection is available to service this operation: SETEX feed-channels:vleggqikrugmxeprwhwc2a:last-retry
We were used to transient connection errors with the ServiceStack library, and those exception messages were usually like this:
Unable to Connect: sPort: 63980
I'm under the impression that SE.Redis should be retrying connections and commands in the background for me. Do I still need to be wrapping our calls through SE.Redis in a retry policy of my own? Perhaps different timeout values would be more appropriate (though I'm not sure what values to use)?
Our redis connection string sets these parameters: abortConnect=false,syncTimeout=2000,ssl=true. We use a singleton instance of ConnectionMultiplexer and transient instances of IDatabase.
The vast majority of our Redis use goes through a Cache class, and the important bits of the implementation are below, in case we're doing something silly that's causing us problems.
Our keys are generally 10-30 or so character strings. Values are largely scalar or reasonably small serialized object sets (hundred bytes to a few kB generally), though we do also store jpg images in the cache so a large chunk of the data is from a couple hundred kB to a couple MB.
Perhaps I should be using different multiplexers for small and large values, probably with longer timeouts for larger values? Or a couple of multiplexers in case one is stalled?
public class Cache : ICache
{
    private readonly IDatabase _redis;

    public Cache(IDatabase redis)
    {
        _redis = redis;
    }

    // storing this placeholder value allows us to distinguish between a stored null and a non-existent key
    // while only making a single call to redis. see Exists method.
    static readonly string NULL_PLACEHOLDER = "$NULL_VALUE$";

    // this is a dictionary of https://github.com/StephenCleary/AsyncEx/wiki/AsyncLock
    private static readonly ILockCache _locks = new LockCache();

    public T GetOrSet<T>(string key, TimeSpan cacheDuration, Func<T> refresh) {
        T val;
        if (!Exists(key, out val)) {
            using (_locks[key].Lock()) {
                if (!Exists(key, out val)) {
                    val = refresh();
                    Set(key, val, cacheDuration);
                }
            }
        }
        return val;
    }

    private bool Exists<T>(string key, out T value) {
        value = default(T);
        var redisValue = _redis.StringGet(key);
        if (redisValue.IsNull)
            return false;
        if (redisValue == NULL_PLACEHOLDER)
            return true;
        value = typeof(T) == typeof(byte[])
            ? (T)(object)(byte[])redisValue
            : JsonSerializer.DeserializeFromString<T>(redisValue);
        return true;
    }

    public void Set<T>(string key, T value, TimeSpan cacheDuration)
    {
        if (value.IsDefaultForType())
            _redis.StringSet(key, NULL_PLACEHOLDER, cacheDuration);
        else if (typeof(T) == typeof(byte[]))
            _redis.StringSet(key, (byte[])(object)value, cacheDuration);
        else
            _redis.StringSet(key, JsonSerializer.SerializeToString(value), cacheDuration);
    }

    public async Task<T> GetOrSetAsync<T>(string key, Func<T, TimeSpan> getSoftExpire, TimeSpan additionalHardExpire, TimeSpan retryInterval, Func<Task<T>> refreshAsync) {
        var softExpireKey = key + ":soft-expire";
        var lastRetryKey = key + ":last-retry";
        T val;
        if (ShouldReturnNow(key, softExpireKey, lastRetryKey, retryInterval, out val))
            return val;
        using (await _locks[key].LockAsync()) {
            if (ShouldReturnNow(key, softExpireKey, lastRetryKey, retryInterval, out val))
                return val;
            Set(lastRetryKey, DateTime.UtcNow, additionalHardExpire);
            try {
                var newVal = await refreshAsync();
                var softExpire = getSoftExpire(newVal);
                var hardExpire = softExpire + additionalHardExpire;
                if (softExpire > TimeSpan.Zero) {
                    Set(key, newVal, hardExpire);
                    Set(softExpireKey, DateTime.UtcNow + softExpire, hardExpire);
                }
                val = newVal;
            }
            catch (Exception) {
                if (val == null)
                    throw;
            }
        }
        return val;
    }

    private bool ShouldReturnNow<T>(string valKey, string softExpireKey, string lastRetryKey, TimeSpan retryInterval, out T val) {
        if (!Exists(valKey, out val))
            return false;
        var softExpireDate = Get<DateTime?>(softExpireKey);
        if (softExpireDate == null)
            return true;
        // value is in the cache and not yet soft-expired
        if (softExpireDate.Value >= DateTime.UtcNow)
            return true;
        var lastRetryDate = Get<DateTime?>(lastRetryKey);
        // value is in the cache, it has soft-expired, but it's too soon to try again
        if (lastRetryDate != null && DateTime.UtcNow - lastRetryDate.Value < retryInterval)
            return true;
        return false;
    }
}
A few recommendations:
- You can use different multiplexers with different timeout values for different types of keys/values; a sketch follows this list.
http://azure.microsoft.com/en-us/documentation/articles/cache-faq/
- Make sure you are not network-bound on the client or the server. If you are on the server, move to a higher SKU, which has more bandwidth.
Please read this post for more details:
http://azure.microsoft.com/blog/2015/02/10/investigating-timeout-exceptions-in-stackexchange-redis-for-azure-redis-cache/
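A minimal sketch of the split-multiplexer idea (the host name and the timeout values are placeholder assumptions; the API calls are standard StackExchange.Redis):

using System;
using StackExchange.Redis;

public static class RedisConnections
{
    // Default timeout for small scalar values.
    private static readonly Lazy<ConnectionMultiplexer> Small =
        new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect(
            "contoso.redis.cache.windows.net,abortConnect=false,ssl=true,syncTimeout=2000"));

    // Longer timeout for the multi-hundred-kB image blobs.
    private static readonly Lazy<ConnectionMultiplexer> Large =
        new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect(
            "contoso.redis.cache.windows.net,abortConnect=false,ssl=true,syncTimeout=10000"));

    public static IDatabase SmallValues { get { return Small.Value.GetDatabase(); } }
    public static IDatabase LargeValues { get { return Large.Value.GetDatabase(); } }
}

The Cache class above could then take two IDatabase instances (or two Cache instances could be registered) and route by expected value size.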
