How to disable ANSI escape code output during a Rollup build? - vite

When doing a production build with Vite, it uses rollupjs to do the code transformation, which prints some progress information in the terminal. While this is fine on a local dev box, it becomes a nuisance in a CI build (here Jenkins):
[36mvite v4.0.3 [32mbuilding for production...[36m[39m
transforming...
Missing export "createServer" has been shimmed in module "__vite-browser-external".
Missing export "Socket" has been shimmed in module "__vite-browser-external".
[32m✓[39m 3098 modules transformed.
rendering chunks...
computing gzip size...
[2mbuild/[22m[2massets/[22m[32meditor-side-line-straight-3b1dfbb4.svg [39m[1m[2m 0.41 kB[22m[1m[22m
[2mbuild/[22m[2massets/[22m[32msquiggle-cb9fe55e.svg [39m[1m[2m 0.47 kB[22m[1m[22m
[2mbuild/[22m[2massets/[22m[32mmarkdown-8b37fca1.svg [39m[1m[2m 0.58 kB[22m[1m[22m
[2mbuild/[22m[2massets/[22m[32mshell-40c17629.svg [39m[1m[2m 0.60 kB[22m[1m[22m
[2mbuild/[22m[2massets/[22m[32mremove-899a6ae3.svg [39m[1m[2m 0.65 kB[22m[1m[22m
Is it possible to disable the ANSI escape codes in the output and get just the plain text? Ideally, this should be configurable (say, via an environment variable), so it can be turned off only when doing a CI build.
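One thing worth trying (a sketch, not an established recipe from the question): Vite exposes a customLogger option in its config, so you can wrap the default logger and strip ANSI sequences before they reach the terminal, gated on an environment variable of your choice (CI is used below only as a placeholder). Depending on the Vite version, parts of the build output may bypass the logger; also, simply exporting NO_COLOR=1 in the CI environment may be enough on its own, since the picocolors library Vite uses for coloring honors that convention.
// vite.config.js - hedged sketch, assuming Vite 4's createLogger API
import { defineConfig, createLogger } from "vite";

const ansiPattern = /\x1b\[[0-9;]*m/g;                  // SGR color/style escape sequences
const stripAnsi = (msg) => msg.replace(ansiPattern, "");

const base = createLogger();                            // Vite's default logger
const plainLogger = {
  ...base,
  info: (msg, opts) => base.info(stripAnsi(msg), opts),
  warn: (msg, opts) => base.warn(stripAnsi(msg), opts),
  error: (msg, opts) => base.error(stripAnsi(msg), opts),
};

export default defineConfig({
  // swap in the plain logger only when the environment marks this as a CI build
  customLogger: process.env.CI ? plainLogger : undefined,
});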

Related

parsing SVG in webpack

I have created a project on Symfony 5. I am receiving an error in webpack when I run 'yarn build'. I have been trying to fix it for a few days without success, so I decided to ask for some help :)
This is the error I am getting:
I have enabled postCssLoader in my webpack.config.js and created postcss.config.js in my root directory:
.enablePostCssLoader()
postcss.config.js file:
module.exports = {
    plugins: [
        require('autoprefixer'),
        require('postcss-svgo'),
        require('postcss-inline-svg'),
        require('postcss-write-svg'),
    ]
}
And here is a sample of svg I am trying to write in my css
.custom-checkbox .custom-control-input:checked~.custom-control-label::after {
background-image: url('data:image/svg+xml,%3csvg xmlns=\'http://www.w3.org/2000/svg\' width=\'8\'
height=\'8\' viewBox=\'0 0 8 8\'%3e%3cpath fill=\'%23fff\' d=\'M6.564.75l-3.59 3.612-1.538-
1.55L0 4.26l2.974 2.99L8 2.193z\'/%3e%3c/svg%3e')
}
If the error transfers the code verbatim, then there are two line breaks (and indentation) that make the property invalid (see "CRLF": ..width=\'8\'CRLF height.. - this one you can backslash-escape in CSS, and ..1.538-CRLF 1.55L.. - this one, with its indentation, separates the numeral and makes the path data invalid; you have to remove all whitespace between the minus and the digit). If this is it, simply removing the line breaks (and superfluous whitespace) should fix it:
background-image: url('data:image/svg+xml,%3csvg xmlns=\'http://www.w3.org/2000/svg\' width=\'8\' height=\'8\' viewBox=\'0 0 8 8\'%3e%3cpath fill=\'%23fff\' d=\'M6.564.75l-3.59 3.612-1.538-1.55L0 4.26l2.974 2.99L8 2.193z\'/%3e%3c/svg%3e')
If the code snippet you provided is not taken directly from your source code, then you probably have some formatter breaking it in the process (?)
N.B. you don't usually have to escape SVG data URIs this much; you could go with url("data:image/svg+xml,<svg xmlns='http://www.w3.org/2000/svg' width='8' height='8' viewBox='0 0 8 8' fill='%23fff'><path d='M6.564.75l-3.59 3.612-1.538-1.55L0 4.26l2.974 2.99L8 2.193z'/></svg>") (i.e. the only escaped sequence is # -> %23) and most interpreters should pick it up just fine. I'm not sure about your build stack, but I'd guess that the "safe, over-escaped format for obsolete IEs" could be produced as the build result; and if you use a preprocessor, you can embed external resources as data URIs, which could prevent such formatting accidents. (Ah, that's probably what postcss-inline-svg is doing for you.)
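As an aside, if you generate such URIs in JavaScript as part of the build, a tiny helper along these lines (purely illustrative, not part of the original answer; the function name is made up) produces the minimally escaped form described above:
// svgToCssUrl: collapse whitespace so the URI stays on one line, and escape only '#'
function svgToCssUrl(svg) {
  const body = svg
    .replace(/\s+/g, " ")    // line breaks inside the URI are what broke the original rule
    .trim()
    .replace(/#/g, "%23");   // '#' would otherwise be parsed as a URL fragment
  return `url("data:image/svg+xml,${body}")`;
}

// The checkbox tick from the question, using single quotes inside the markup:
const tick =
  "<svg xmlns='http://www.w3.org/2000/svg' width='8' height='8' viewBox='0 0 8 8'>" +
  "<path fill='#fff' d='M6.564.75l-3.59 3.612-1.538-1.55L0 4.26l2.974 2.99L8 2.193z'/>" +
  "</svg>";

console.log(svgToCssUrl(tick));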

Kernel Build Caching/Nondeterminism

I run a CI server which I use to build a custom linux kernel. The CI server is not powerful and has a time limit of 3h per build. To work within this limit, I had the idea to cache kernel builds using ccache. My hope was that I could create a cache once every minor version release and reuse it for the patch releases e.g. I have a cache I made for 4.18 which I want to use for all 4.18.x kernels.
After removing the build timestamps, this works great for the exact kernel version I am building for. For the 4.18 kernel referenced above, building that on the CI gives the following statistics:
$ ccache -s
cache directory
primary config
secondary config (readonly) /etc/ccache.conf
stats zero time Thu Aug 16 14:36:22 2018
cache hit (direct) 17812
cache hit (preprocessed) 38
cache miss 0
cache hit rate 100.00 %
called for link 3
called for preprocessing 29039
unsupported code directive 4
no input file 2207
cleanups performed 0
files in cache 53652
cache size 1.4 GB
max cache size 5.0 GB
A cache hit rate of 100% and an hour to complete the build: fantastic stats, and as expected.
Unfortunately, when I try to build 4.18.1, I get
cache directory
primary config
secondary config (readonly) /etc/ccache.conf
stats zero time Thu Aug 16 10:36:22 2018
cache hit (direct) 0
cache hit (preprocessed) 233
cache miss 17658
cache hit rate 1.30 %
called for link 3
called for preprocessing 29039
unsupported code directive 4
no input file 2207
cleanups performed 0
files in cache 90418
cache size 2.4 GB
max cache size 5.0 GB
That's a 1.30% hit rate, and the build time reflects this poor performance. And that from only a single patch-version change.
I would have expected the caching performance to degrade over time but not to this extent, so my only thought is that there is more non-determinism than simply the timestamp. For example, are most/all of the source files including the full kernel version string? My understanding is that something like that would break the caching completely. Is there a way to make the caching work as I'd like it to or is it impossible?
There is an include/generated/uapi/linux/version.h header (generated in the top Makefile, https://elixir.bootlin.com/linux/v4.16.18/source/Makefile) which encodes the exact kernel version as a macro:
version_h := include/generated/uapi/linux/version.h
old_version_h := include/linux/version.h

define filechk_version.h
	(echo \#define LINUX_VERSION_CODE $(shell \
	expr $(VERSION) \* 65536 + 0$(PATCHLEVEL) \* 256 + 0$(SUBLEVEL)); \
	echo '#define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) + (c))';)
endef

$(version_h): $(srctree)/Makefile FORCE
	$(call filechk,version.h)
	$(Q)rm -f $(old_version_h)
So version.h for Linux 4.16.18 will be generated like this (266258 is (4 << 16) + (16 << 8) + 18 = 0x41012):
#define LINUX_VERSION_CODE 266258
#define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) + (c))
Later, for example when building modules, there should be a way to read the LINUX_VERSION_CODE macro value: https://www.tldp.org/LDP/lkmpg/2.4/html/lkmpg.html (4.1.6. Writing Modules for Multiple Kernel Versions)
The way to do this is to compare the macro LINUX_VERSION_CODE to the macro KERNEL_VERSION. In version a.b.c of the kernel, the value of this macro would be 2^{16}a + 2^{8}b + c. Be aware that this macro is not defined for kernel 2.0.35 and earlier, so if you want to write modules that support really old kernels ...
How is version.h included? The sample module includes <linux/kernel.h>, <linux/module.h> and <linux/modversions.h>, and one of these files probably includes the global version.h indirectly. And most, or even all, kernel sources will include version.h.
When your build compared timestamps, version.h may have been regenerated, which disables ccache. When timestamps are ignored, LINUX_VERSION_CODE is the same only for exactly the same kernel version, and it changes with every patchlevel.
Update: check the gcc -H output of some kernel object compilation; there will be other headers with full-kernel-version macro definitions, for example include/generated/utsrelease.h (the UTS_RELEASE macro) and include/generated/autoconf.h (CONFIG_VERSION_SIGNATURE).
Or even run gcc -E preprocessing of the same kernel object compilation for two patchlevels and compare the generated text. With the simplest Linux module I have -include ./include/linux/kconfig.h directly on the gcc command line, and it includes include/generated/autoconf.h (but this is not visible in the -H output; is that a bug or a feature of gcc?).
https://patchwork.kernel.org/patch/9326051/
... because the top Makefile forces it to be included with:
-include $(srctree)/include/linux/kconfig.h
It actually does: https://elixir.bootlin.com/linux/v4.16.18/source/Makefile
# Use USERINCLUDE when you must reference the UAPI directories only.
USERINCLUDE := \
	-I$(srctree)/arch/$(SRCARCH)/include/uapi \
	-I$(objtree)/arch/$(SRCARCH)/include/generated/uapi \
	-I$(srctree)/include/uapi \
	-I$(objtree)/include/generated/uapi \
	-include $(srctree)/include/linux/kconfig.h

# Use LINUXINCLUDE when you must reference the include/ directory.
# Needed to be compatible with the O= option
LINUXINCLUDE := \
	-I$(srctree)/arch/$(SRCARCH)/include \
	-I$(objtree)/arch/$(SRCARCH)/include/generated \
	$(if $(KBUILD_SRC), -I$(srctree)/include) \
	-I$(objtree)/include \
	$(USERINCLUDE)
LINUXINCLUDE is exported to the environment and used in scripts/Makefile.lib to define the compiler flags: https://elixir.bootlin.com/linux/v4.16.18/source/scripts/Makefile.lib
c_flags = -Wp,-MD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)

Why do OpenGL-based VTK targets in drake executed via `bazel test` sometimes fail on Linux?

While a binary works with bazel run, when I run a test using bazel test, such as:
$ bazel test //systems/sensors:rgbd_camera_test
I encounter a slew of errors from VTK / OpenGL:
ERROR: In /vtk/Rendering/OpenGL2/vtkXOpenGLRenderWindow.cxx, line 820
vtkXOpenGLRenderWindow (0x55880715b760): failed to create offscreen window
ERROR: In /vtk/Rendering/OpenGL2/vtkOpenGLRenderWindow.cxx, line 816
vtkXOpenGLRenderWindow (0x55880715b760): GLEW could not be initialized.
ERROR: In /vtk/Rendering/OpenGL2/vtkShaderProgram.cxx, line 453
vtkShaderProgram (0x5588071d5aa0): Shader object was not initialized, cannot attach it.
ERROR: In /vtk/Rendering/OpenGL2/vtkOpenGLRenderWindow.cxx, line 1858
vtkXOpenGLRenderWindow (0x55880715b760): Hardware does not support the number of textures defined.
May I ask why this happens?
(Note: This post is a means to migrate from http://drake.mit.edu/faq.html to StackOverflow for user-based questions.)
The best workaround at the moment is to first mark the test as local in the BUILD.bazel file, either with local = 1 or tags = [.., "local"]. Doing so will make the specific target run without sandboxing, so that it has an environment similar to that of bazel run.
As an example, in systems/sensors/BUILD.bazel:
drake_cc_googletest(
    name = "rgbd_camera_test",
    # ...
    local = 1,
    # ...
)
If this does not work, then try running the test in Bazel without sandboxing:
$ bazel test --spawn_strategy=standalone //systems/sensors:rgbd_camera_test
Please note that you can possibly add --spawn_strategy=standalone to your ~/.bazelrc, but be aware that this means your development testing environment may deviate even more from other developers' testing environments.

Webpack Globalize fails build when set to production mode: No formatters or parsers provided

I'm working on a React/Webpack/Globalize app.
In development mode things are ok-ish (though Globalize insists on compiling all locales instead of the one I have selected, but that's another question for another day).
However, when I'm setting production: true in my webpack config, I'm getting the following error when running npm run build
> webpack --config webpack.prod.config.js
/opt/app/ui/node_modules/globalize-webpack-plugin/GlobalizeCompilerHelper.js:72
throw e;
^
Error: No formatters or parsers has been provided
I was under the impression the globalize webpack plugin is meant to handle precompilation. Any idea why I'm seeing this error? When I'm setting production: false things compile fine.
My plugin setup is:
new GlobalizePlugin({
    production: true,
    developmentLocale: "en",
    supportedLocales: [ "en" ],
    output: "i18n/[locale].[hash].js"
}),
When a file changes and webpack-dev-server rebuilds, I'm getting a LOT of these messages indicating recompilation of locales I am not using:
[461] ./~/cldr-data/main/es-PY/dateFields.json 15 kB {0} [optional]
[462] ./~/cldr-data/main/es-SV/dateFields.json 15 kB {0} [optional]
[463] ./~/cldr-data/main/es-US/dateFields.json 15 kB {0} [optional]
[464] ./~/cldr-data/main/es-UY/dateFields.json 15 kB {0} [optional]
[465] ./~/cldr-data/main/es-VE/dateFields.json 15 kB {0} [optional]
[466] ./~/cldr-data/main/es/dateFields.json 15 kB {0} [optional]
Nothing I try seems to get past that problem.
Thanks
As it stands, the messages key is not 'optional', but actually required. More than that, somewhere you need to 'prime' (for lack of a better word) the message formatter by calling Globalize.formatMessage("somekey") (where somekey exists in your lang file). All this is required when production is set to true.
As well, if you do set production to true, the output path must match an existing path in your source tree. If for example your code builds into /assets, the output path should be assets/i18n/[locale].[hash].js. Otherwise the i18n directory will not be created on build.
All this is derived from a discussion in the github repo:
https://github.com/rxaviers/globalize-webpack-plugin/issues/10
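Putting those two points together, here is a minimal sketch (the entry path, message contents and key name below are hypothetical, not taken from the linked issue) of what the "priming" call might look like in a module webpack can statically analyze:
// src/i18n.js - illustrative only
var Globalize = require("globalize");

// Messages for the supported locale; "greeting" is a made-up key.
Globalize.loadMessages({
  en: {
    greeting: "Hello, {name}!"
  }
});

// The plugin's production compiler precompiles the formatters it can find
// statically, so at least one Globalize.formatMessage(...) call with a
// literal key needs to exist somewhere in the bundle.
module.exports = function greet(name) {
  return Globalize.formatMessage("greeting", { name: name });
};
With no such call anywhere in the bundled code, the production compiler has nothing to precompile, which appears to be what triggers the "No formatters or parsers has been provided" error above.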

Issue getting ANSICON working on Windows 7 Enterprise 64-bit

I have been trying to get ANSICON 1.50 or 1.40 (https://github.com/adoxa/ansicon) working and have looked at sooooo many pages explaining how to install it:
http://blog.mmediasys.com/2010/11/24/we-all-love-colors/
http://carol-nichols.com/2011/03/the-system-cannot-find-the-path-specified/
etc....
So, I have my AutoRun set to "C:\usr\bin\ansi140\x64\ansicon.exe" -p, and I also tested 1.50, but there was zero change.
My entire team has this working with no issues, but I cannot get this to work. I still get the garbled junk on the command prompt:
Scenario: Residential caller chooses to hear payment locations closest to home and there are 3 locations available which are in a 25 miles radius.?[90m #
features\payment_locations.feature:5?[0m
?[32mGiven the call flow is '?[32m?[1mDivisional?[0m?[0m?[32m'?[90m
# features/step_definitions/common_steps.rb:5?[0m?[0m
?[32mAnd the ani is '?[32m?[1m6101234572?[0m?[0m?[32m'?[90m
# features/step_definitions/common_steps.rb:9?[0m?[0m
?[32mAnd the dnis is '?[32m?[1m9?[0m?[0m?[32m'?[90m
# features/step_definitions/common_steps.rb:13?[0m?[0m
?[31mWhen the call is started?[90m
# features/step_definitions/common_steps.rb:17?[0m?[0m
?[31m Connection refused - Connection refused (Errno::ECONNREFUSED)?[0m
?[31m org/jruby/ext/socket/RubyTCPSocket.java:121:in `initialize'?[0m
?[31m org/jruby/RubyIO.java:864:in `new'?[0m
?[31m org/jruby/ext/socket/RubyTCPSocket.java:147:in `open'?[0m
?[31m c:/usr/bin/jruby-1.6.4/lib/ruby/1.8/net/http.rb:560:in `connect'?[0m
?[31m org/jruby/ext/Timeout.java:79:in `timeout'?[0m
?[31m c:/usr/bin/jruby-1.6.4/lib/ruby/1.8/net/http.rb:560:in `connect'?[0m
?[31m c:/usr/bin/jruby-1.6.4/lib/ruby/1.8/net/http.rb:553:in `do_start'?[0m
?[31m c:/usr/bin/jruby-1.6.4/lib/ruby/1.8/net/http.rb:548:in `start'?[0m
?[31m org/jruby/RubyKernel.java:2100:in `send'?[0m
?[31m ./features/support/request_helper.rb:12:in `request'?[0m
?[31m ./features/support/request_helper.rb:4:in `get'?[0m
?[31m ./features/step_definitions/common_steps.rb:22:in `(root)':in `/^the call is started$/'?[0m
?[31m features\payment_locations.feature:9:in `When the call is started'?[0m
Can anyone PLEASE help me understand why I am having this issue when the rest of my team, with the same laptops, is not?
EDIT from first comment:
I tried what you asked and ended up with this tab setting:
<tab title="ANSICON" icon="linux.ico" use_default_icon="0">
    <console shell="C:\usr\bin\ansi150\x64\ansicon.exe" init_dir="C:\usr\git_workspaces\d2" run_as_user="0" user=""/>
    <cursor style="0" r="255" g="255" b="255"/>
    <background type="0" r="0" g="0" b="0">
        <image file="" relative="0" extend="0" position="0">
            <tint opacity="0" r="0" g="0" b="0"/>
        </image>
    </background>
</tab>
But the issue still persists
?[0m ?[36m <catch event="error">
?[0m ?[36m <submit next="/d2/exception/handleVoiceBrowserError.vxml" namelist="_event _message" />
?[0m ?[36m </catch>
?[0m ?[36m
?[36m</vxml>?[0m
?[32mThen play the payment locations?[90m
# features/step_definitions/billing_steps.rb:360?[0m?[0m
?[32mThen caller hangs up the phone?[90m
# features/step_definitions/goodbye_steps.rb:1?[0m?[0m
1 scenario (?[32m1 passed?[0m)
32 steps (?[32m32 passed?[0m)
0m10.302s
I also seem to have the term-ansicolor gem so this should be working
*** LOCAL GEMS ***
atoulme-Antwrap (0.7.1 java)
bouncy-castle-java (1.5.0146.1)
builder (2.1.2)
buildr (1.4.6 java)
buildr-xivr (0.0.6, 0.0.4)
bundler (1.0.20)
crack (0.1.8)
cucumber (1.0.0, 0.10.2)
diff-lcs (1.1.2)
gherkin (2.4.21 java, 2.4.16 java, 2.3.8 java)
highline (1.5.1)
hoe (2.3.3)
hpricot (0.8.3 java)
httparty (0.7.8, 0.7.7)
jruby-openssl (0.7.5, 0.7.4)
jruby-win32ole (0.8.5)
json (1.6.5 java, 1.5.4 java, 1.5.1 java)
json_pure (1.4.3)
mechanize (1.0.0)
minitar (0.5.3)
net-scp (1.0.4)
net-sftp (2.0.4)
net-ssh (2.0.23)
nokogiri (1.5.0 java, 1.5.0.beta.4 java)
rake (0.8.7)
rspec (2.1.0, 1.3.2)
rspec-core (2.1.0)
rspec-expectations (2.1.0)
rspec-mocks (2.1.0)
rubyforge (2.0.3)
rubygems-update (1.8.10)
rubyzip (0.9.4)
sources (0.0.1)
term-ansicolor (1.0.7, 1.0.6, 1.0.5)
xml-simple (1.0.12)
but it is not. Neither on Cygwin nor CMD.
The plot thickens
Using ansicon worked. Additionally, you may want to check out Console2 for an excellent multi-tabbed console.
Extract ansi152/x64 into <console-install-dir>, say C:\Apps\Console.
Configure Console2 to run with different shells, e.g. PowerShell, cmd, Git Bash.
Run ansicon -i from <console-install-dir>, in Console2.
PS. You may need to add <console-install-dir> to your $env:path.
To fix the ansicon installation:
Grab Console2 and extract it to a folder; mine is C:\Applications\.
Extract the files from ansi150.zip\x64 (use the 64-bit binaries) and place them in the same folder as Console2.
Open C:\Applications\Console2\Console.exe.
From the Console2 menu open File > Edit > Settings > Tabs, fill in Shell with C:\Applications\Console2\ansicon.exe, or browse to it. Click OK.
To apply the changes, reopen a Console2 tab.
My setup is Console-2.00b148-Beta_64bit.zip and ansi150.zip on Win 7 64-bit (without editing AutoRun registry).
I had this problem myself and I am finally seeing colored output.
I followed these steps:
1. Download "https://github.com/downloads/adoxa/ansicon/ansi150.zip"
2. Copy the files under the "x64" directory to somewhere in your path. For example, you can copy them to "c:\windows\system32".
3. Download "https://github.com/downloads/adoxa/ansicon/ansi6432.zip"
4. Copy the files under the "x64" directory to the same location you used in step #2. This should overwrite ANSI32.DLL and ansicon.exe.
5. Install ansicon by typing "ansicon -I" at a command prompt.
You should now see colored output.
Make sure to enable logging by setting the environment variable ANSICON_LOG:
set ANSICON_LOG=3
This should log output to %TEMP%\ansicon.log (Usually "c:\temp\ansicon.log")
I just found out that we need to set this:
set ANSICON_EXC=nvd3d9wrap.dll
(add it to an ansicon.bat file, or set it as an environment variable). Works like a champ with win64x pro, ansicon 1.64.
