Snapshot tests locally vs on Percy - jestjs

I have a problem with snapshot testing.
Tests I run locally generate snapshots that look the way they're supposed to (i.e. a table in the right place, popups, etc.). But when Percy.io runs the same tests, the snapshots look different for some reason: one column and the popups are shifted a bit, in both Chrome and Firefox.
What could cause this difference?

Related

I can't ignore existing files when using Delta Live Tables

I created a DLT pipeline targeting a terabyte-scale directory with the file notifications option turned on. I set "cloudFiles.includeExistingFiles": false so that existing files are ignored and data is only ingested starting from the first run.
What I expect to happen is that on the first run (t0) no data is ingested, while on the second run (t1) the data that arrived between t0 and t1 is ingested. I also expect the first run to complete almost instantly, and since I am using file notifications, I expect the second run to complete pretty fast as well.
I started the first run, and it has now been running for 7 hours :) No data has been ingested, as I expected, but I have no idea what the pipeline is doing right now. I guess it is doing something with the existing files even though I explicitly stated that I want to ignore them.
Any ideas why the behavior I expected isn't happening?
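For reference, here is a minimal sketch of the setup described above, assuming a DLT Python notebook; the source path, file format, and table name are placeholders, not the actual pipeline.

    import dlt

    # Hypothetical landing path and file format, for illustration only.
    SOURCE_PATH = "s3://my-bucket/landing/"

    @dlt.table(name="raw_events")
    def raw_events():
        # `spark` is provided by the Databricks runtime in a DLT notebook.
        return (
            spark.readStream.format("cloudFiles")           # Auto Loader source
            .option("cloudFiles.format", "json")
            .option("cloudFiles.useNotifications", "true")  # file notifications mode
            # Skip files already present when the stream starts; ingest only new arrivals.
            .option("cloudFiles.includeExistingFiles", "false")
            .load(SOURCE_PATH)
        )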

Information Link Caching

I'm working through a problem that I can't seem to find a solution for. I'm attempting to speed up the load time for a report. The idea is to open the report in the Analyst Client, and I've identified one information link that bogs down the load time. Easy enough, I figured I'd cache the information link:
I reloaded the report expecting the first load to take a while, but the data reloads in full every time. The amount of data is less than 2 GB, so that can't be the problem. The only other issue I can think of is that I am filtering the data in the WHERE clause of my SQL statement. Can you all think of anything I'm missing?

Azure Monitor Workbooks: auto-update of hidden tiles

I have developed a workbook that merges data from a couple of small queries. These queries (tiles) are hidden (conditionally visible, condition: 1=2). It works during development when I run the queries manually, but when I open the workbook another day in read mode, the resulting (merge) query never starts or finishes. It shows as running, but it runs endlessly until I run the source queries manually and then rerun the merge query.
It looks like a pretty standard configuration, so why doesn't it work for me?
(screenshots: source queries, merge query)
In workbooks, queries will run if they are
a) visible
-or-
b) referenced by a merge that is visible
We currently do not support merges referencing hidden merges.
But fear not! A lot of people miss that a merge step can actually do many merges at once, so you almost never need a merge that references another merge.

Options for running data extraction on a daily basis

I currently have an Excel-based data extraction method using Power Query and VBA (for documents with passwords). Ideally this would be scheduled to run once or twice a day.
My current solution involves setting up a spare laptop on the network that will run the extraction twice a day on its own. This works, but I am keen to understand the other options. The task itself seems to be quite a struggle for our standard hardware: it covers 6 network locations across 2 servers, with around 30,000 rows and increasing.
Any suggestions would be greatly appreciated
Thanks
If you are going to work with increasing data, and you are going to dedicate a laptop exclusively to the process, I would think about installing a database on that laptop (MySQL, for example). You could use Access too... but Access file corruption is a risk.
Download into this database all the data you need for your report, using incremental loads (only new, modified, and deleted records).
Then run the Excel report, extracting from this database on the same computer.
This should improve the performance of your solution.
Your biggest problem is probably that you query ALL the data on each report generation.
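To make the incremental idea concrete, here is a rough sketch in Python, assuming the network shares hold Excel files and using SQLite as the local database purely for illustration (MySQL would work the same way); all paths and names are hypothetical.

    import sqlite3
    from pathlib import Path

    # Hypothetical network locations; replace with your six shares.
    SOURCES = [Path(r"\\server1\share\extracts"), Path(r"\\server2\share\extracts")]
    DB_PATH = "extraction_cache.db"

    def sync_incrementally():
        """Record file mtimes locally and return only new or modified files."""
        con = sqlite3.connect(DB_PATH)
        con.execute(
            "CREATE TABLE IF NOT EXISTS seen_files (path TEXT PRIMARY KEY, mtime REAL)"
        )
        changed = []
        for source in SOURCES:
            for f in source.glob("*.xlsx"):
                mtime = f.stat().st_mtime
                row = con.execute(
                    "SELECT mtime FROM seen_files WHERE path = ?", (str(f),)
                ).fetchone()
                if row is None or row[0] < mtime:
                    changed.append(f)  # new or modified since the last run
                    con.execute(
                        "INSERT OR REPLACE INTO seen_files (path, mtime) VALUES (?, ?)",
                        (str(f), mtime),
                    )
        con.commit()
        con.close()
        return changed

    if __name__ == "__main__":
        # Feed only the changed files to the existing Power Query / VBA step,
        # or load their rows into the local database and point Excel at that.
        for f in sync_incrementally():
            print("needs reprocessing:", f)

A script like this can be scheduled with Windows Task Scheduler to run twice a day on the dedicated laptop, so only new or modified files are reprocessed on each run.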

Unloading a Large SQL Anywhere Table

An old box running SQL Anywhere 9 has been brought back to life.
I need to retrieve data from it to migrate to SQL Server, and then I can kill the old box again.
I ran an unload on 533 tables, which all ran OK. I have 1 table that does not unload.
I run dbunload from the command line, and since it worked for 533 tables... in theory it should work.
I don't see anything specifically wrong with this table. The unload runs but gives no errors (and no file is written either). I checked the event log: no errors there either. I just wonder how to diagnose the problem and find out what is wrong. One thing I do notice is that this is the only table without a primary key, but I don't know if that matters for unload.
The production database contains 50,000,000 entries in this table. The archive version only contains 15 million entries, but both versions refuse to be unloaded, or better said: they give no indication of what is wrong and simply don't generate an unload file.
I wonder if the size is the problem, maybe memory or temp disk space? Or otherwise a timeout that is set somewhere.
I also wonder if there is an alternative.
P.S. The GUI (Sybase Central) just shows an endless spinning wheel that never finishes.
Update: I saw that dbunload works out of the user's Local Settings/Temp directory. Possibly that is why it fails, or possibly there is some other place where it temporarily saves items.
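On the question of an alternative: one option is to bypass dbunload and stream the table out yourself over ODBC. Below is a rough sketch in Python using pyodbc, assuming an ODBC DSN has been set up for the SQL Anywhere 9 database; the DSN, credentials, table name, and batch size are placeholders.

    import csv
    import pyodbc

    # Hypothetical connection details; assumes an ODBC DSN pointing at the
    # SQL Anywhere 9 database (the ASA 9 ODBC driver must be installed).
    CONN_STR = "DSN=asa9_archive;UID=dba;PWD=sql"
    TABLE = "big_table"          # the table that refuses to unload
    BATCH_SIZE = 50_000          # stream in chunks to keep memory use flat

    def export_table(out_path: str) -> None:
        con = pyodbc.connect(CONN_STR)
        cur = con.cursor()
        cur.execute(f"SELECT * FROM {TABLE}")
        with open(out_path, "w", newline="", encoding="utf-8") as fh:
            writer = csv.writer(fh)
            writer.writerow([c[0] for c in cur.description])  # header row
            while True:
                rows = cur.fetchmany(BATCH_SIZE)
                if not rows:
                    break
                writer.writerows(rows)
        con.close()

    if __name__ == "__main__":
        export_table("big_table.csv")

Writing in batches keeps memory use flat even at 50 million rows, and the resulting CSV can then be bulk loaded into SQL Server.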
