Extending Apache Superset and adding a link - python-3.x

I have a Superset fork that I managed to get up and running using docker compose. Now I'm trying to do one simple thing: add a new link to the existing SQL Lab menu.
What I'm trying to do is call appbuilder.add_link in the init_views method of the SupersetAppInitializer class (app.py):
class SupersetAppInitializer:
    # ...
    def init_views(self) -> None:
        # ...
        appbuilder.add_link(
            __("Saved Queries"),
            href="/savedqueryview/list/",
            icon="fa-save",
            category="SQL Lab",
        )
        # This is a new link
        appbuilder.add_link(
            __("Test"),
            href="/savedqueryview/list/",
            icon="fa-save",
            category="SQL Lab",
        )
I copied the existing 'Saved Queries' link and added a 'Test' link (using the same href), but it doesn't seem to be working.
What do I need to do to make it work?

I could reproduce the issue by making the same menu change in vim.
The way Apache Superset builds the menu is by requesting the data from the backend, so the link definition itself is fine.
The problem is that when you edit superset/app.py in vim, the change is not properly propagated to the file mounts in the running container: by default vim saves by writing a new file and renaming it over the old one, which single-file bind mounts and some file watchers don't pick up.
You could configure your editor to perform edits in place.
Another option is to restart the container services (e.g. docker compose restart) to rebind the file mounts.
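For instance, this vim setting (an assumption about your editor; any equivalent "write in place" option works) makes vim overwrite the mounted file instead of renaming a temporary copy over it:
" in .vimrc: write files in place so mounts and watchers see the change
set backupcopy=yes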

Related

In a Kotlin multi-platform (or JS) project, (how) can one pass custom command line arguments to Node.js?

I'm working on a Kotlin multi-platform project, and I need my JS tests to run on Node.js but with custom command line arguments (specifically I need node to run with the --expose-gc flag, because some tests need to trigger garbage collection).
Looking at the documentation for the Gradle Kotlin JS DSL I didn't find any mention of how to do that; does anyone know whether it's at all possible and how?
Unfortunately I can't answer your question directly, but here is a suggestion to help you reverse-engineer it.
Let's start with an example. We have Gradle tasks that run the project using webpack's dev server, such as browserDevelopmentRun and browserProductionRun (not sure if multi-platform projects have them, but JS projects do). We can add:
println(tasks.named("browserProductionRun").get().javaClass)
to build.gradle.kts to find out the exact class used for this task. When we sync Gradle, it outputs:
org.jetbrains.kotlin.gradle.targets.js.webpack.KotlinWebpack_Decorated
Now that we know the exact class of this task, we can investigate its API. Auto-completion, or navigating inside the KotlinWebpack class, helps us find out that it has a helpful nodeArgs property for configuring Node.js arguments, so we can set them, for example:
tasks.named("browserProductionRun", org.jetbrains.kotlin.gradle.targets.js.webpack.KotlinWebpack::class).get().nodeArgs.add("--trace-deprecation")
Getting back to your question: in your case I guess you need to investigate the browserTest task. Let's get some info about it by adding:
println(tasks.named("browserTest").get().javaClass)
to build.gradle.kts - a-ha - it seems to be of the org.jetbrains.kotlin.gradle.targets.js.testing.KotlinJsTest_Decorated type. Let's check what's inside. Open KotlinJsTest.kt somehow - for example by typing its name into the window opened by CMD + Shift + O (make sure to select "All Places" there), or just by typing its name somewhere in build.gradle.kts and navigating into it.
The only interesting thing I see inside this open class is the following block:
override fun createTestExecutionSpec(): TCServiceMessagesTestExecutionSpec {
    val forkOptions = DefaultProcessForkOptions(fileResolver)
    forkOptions.workingDir = npmProject.dir
    forkOptions.executable = nodeJs.requireConfigured().nodeExecutable
    val nodeJsArgs = mutableListOf<String>()
    return testFramework!!.createTestExecutionSpec(
        task = this,
        forkOptions = forkOptions,
        nodeJsArgs = nodeJsArgs,
        debug = debug
    )
}
So it might work to create your own subclass of this open class, override its createTestExecutionSpec method, and provide the nodeJsArgs you need inside it. After that you'll need to declare another Gradle task in build.gradle.kts that launches the tests using this new extended class.
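For illustration, here is a sketch of that idea (untested; ExposeGcJsTest is a made-up name, the body simply mirrors the block quoted above with the flag added, and these are internal plugin APIs that can change between versions):
import org.gradle.process.internal.DefaultProcessForkOptions
import org.jetbrains.kotlin.gradle.internal.testing.TCServiceMessagesTestExecutionSpec
import org.jetbrains.kotlin.gradle.targets.js.testing.KotlinJsTest

// Subclass of the test task that injects the extra Node.js flag.
open class ExposeGcJsTest : KotlinJsTest() {
    override fun createTestExecutionSpec(): TCServiceMessagesTestExecutionSpec {
        val forkOptions = DefaultProcessForkOptions(fileResolver)
        forkOptions.workingDir = npmProject.dir
        forkOptions.executable = nodeJs.requireConfigured().nodeExecutable
        // The one change relative to the original body: pass the flag through.
        val nodeJsArgs = mutableListOf("--expose-gc")
        return testFramework!!.createTestExecutionSpec(
            task = this,
            forkOptions = forkOptions,
            nodeJsArgs = nodeJsArgs,
            debug = debug
        )
    }
}
You would then register a task of this type in build.gradle.kts, e.g. tasks.register<ExposeGcJsTest>("browserTestWithGc"), and configure it the way the plugin configures browserTest.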

How to create an example extension for Hybris 2011 version

I just downloaded the latest 2011.1 and tried to use the "ant extgen" command to create a default extension, but I get the following error:
extgen.xml:293: The following error occurred while executing this line:
extgen.xml:35: Source directory '${ext.develop.path}' for template 'training' does not exist.
Would anyone know how to deal with it?
Just run it again and it should work the second time.
There does seem to be a bug in the build scripts that has probably been there a while. I assume that ant extgen was the first thing you ran after unpacking. There was no config folder, so the build script did this:
[input] No config folder was found at /path/to/hybris/config.
[input] Please choose the configuration template.
[input] Press [Enter] to use the default value ([develop], production)
and you chose develop.
Unfortunately, it stores your choice in a variable named input.template, which is the same name the script uses later when asking which extension template you want to base yours on. So the script sees that the variable already has a value and doesn't ask you:
[input] Please choose a template for generation.
[input] Press [Enter] to use the default value (commercewebservices, commercewebservicestests, yacceleratorfulfilmentprocess, yacceleratormarketplaceintegration, yacceleratorordermanagement, yacceleratorstorefront, yaddon, ybackoffice, ycommercewebservices, ycommercewebservicestest, ydocumentcart, [yempty], yhacext, yocc, yoccaddon, yocctests, ysapproductconfigaddon, ysmarteditmodule, yvoid, ywebservices)
It then tries to find a template extension called develop and fails.
Running it a second time means your config folder has already been generated, so the script correctly asks which extension template you want to base your extension on.
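Once the config folder has been generated, you can also skip the prompts entirely by passing the answers on the command line (assuming your extgen.xml reads the standard input.* properties - worth verifying in your version):
ant extgen -Dinput.template=yempty -Dinput.name=training -Dinput.package=org.training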

Chef 12 Opsworks attributes get overriden by last recipe in run list

I am running a Chef 12 OpsWorks stack, and I created a custom cookbook to take backups of a few folders on my web server.
Maybe my approach is wrong, but I basically have two (or more) recipes, one for each website I want backed up, and I update some attributes (site name, backup folder, etc.) in each recipe.
So I start with the following in my default.rb file in the attributes folder:
default['backup']['site'] = "SITE1"
default['backup']['root'] = "/var/www/SITE1"
Then in each backup recipe I have the following at the top of the recipe followed by my back up code:
site1.rb
node.override['backup']['site'] = "SITE1"
node.override['backup']['root'] = "/var/www/SITE1"
site2.rb
node.override['backup']['site'] = "SITE2"
node.override['backup']['root'] = "/var/www/SITE2"
Now in my Setup step on the OpsWorks layer I add all the backup recipes, but the problem arises when I start an instance (or run the Setup step from Deployments): the attributes end up set to whatever the last recipe, in alphabetical order, sets them to.
So the SITE1 backup script, for example, ends up being built with the /var/www/SITE2 root folder in its config, and thus doesn't back up the right site.
Is there a way to prevent this from happening? From what I gather (from my example and from reading the Chef docs), all attributes are compiled together at the beginning and only then are the recipes run, which is why the last set of attribute values wins and every recipe using those attributes sees the same final values.
The only way I can deploy them at the moment is by running each recipe independently, which uses the correct attribute values, but the moment the instance is rebooted or the Setup step is run manually, all the backup scripts go back to backing up just one site.
Is my approach to this wrong? Should I be creating separate named attributes for each recipe?
I ended up fixing this by slightly changing my approach to attributes after reading more about the Chef run process.
I still use attributes for the values that are genuinely common to all recipes, but in each recipe, instead of overriding the recipe-specific attributes, I replaced them with local Ruby variables, which are then passed to my template via the variables property.
So my site1.rb recipe, for example, now looks like this:
site = "SITE1"
root = "/var/www/SITE1"
...
config = {
:site => site,
:root => root,
...
}
...
template "/path/to/config" do
source "config.erb"
variables(
:config => config
)
...
end
This way I get to keep the same variable names inside my config template yet have each backup recipe use its own custom variables without interfering with other recipes.
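For completeness, inside the template those values arrive as an instance variable, so a config.erb along these lines (hypothetical, since the real template isn't shown here) can keep the same names for every site:
# config.erb - rendered once per recipe with that recipe's local values
site: <%= @config[:site] %>
root: <%= @config[:root] %>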

Code Behind RStudio Server Export Function

I am currently using RStudio Server on Red Hat Linux. One nice feature of RStudio Server is that I can export files from the server to my Windows desktop. Does anyone know the code behind the export drop-down?
The export function can be found via the Files tab:
(More >> Export...)
I would like to use code to automate the exporting of objects. I figured I should be able to perform this export using the system function, but I am having trouble.
Thanks for any help.
I think this post might help you:
Spacedman explains that you can trigger the export with the R function browseURL, with the URL parameter replaced by the ftp path to the file.
If you absolutely want to trigger this export with a system command, perhaps you could create an R script that takes the file to export as a parameter and launch that script with the system() function =) Although I can't clearly see the advantage of such a process.
[edit]: Having tried it today, I realise my answer wasn't complete:
If you call browseURL on a file such as "whateverRscript.r", the browser will display it in a tab rather than trigger the download.
To actually make your browser download this kind of file, you can zip it first.
To complete the automation, change your browser's settings so that it doesn't ask every time where to store the downloaded files.
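As a sketch of that zip idea (the file name is just an example, and utils::zip needs a zip binary available on the server):
file_to_export <- "whateverRscript.r"
zip_path <- paste0(file_to_export, ".zip")
zip(zipfile = zip_path, files = file_to_export)  # pack the script so the browser downloads it instead of displaying it
browseURL(zip_path)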
This is what worked for me (run it on the server side; a working browser is required - I used Chrome):
my_data_file_name <- "data.RData"  # set file name
save(Data, file = my_data_file_name)  # save data to file
current_dir <- getwd()  # capture current working directory on the server
my_export_file_path <- paste0(current_dir, '/', my_data_file_name)  # create a path for the file to export
browseURL(my_export_file_path)  # export to local disk using the browser's capabilities

csrun loses executable from .csx package

I am having a hard time with a seemingly simple Azure program.
My exercise is to create a WorkerRole that spawns "hello.exe", which does just that: prints "hello world" and exits.
I used Visual Studio to create the project, then added a new folder "bin2" to the project, where I put hello.exe using the "Add Existing Item" menu option. Then I created a local storage resource named bin2 in ServiceDefinition.csdef, so I can find my executable with RoleEnvironment:
string baseDir = RoleEnvironment.GetLocalResource("bin2").RootPath.Replace('\\', '/');
string command = Path.Combine(baseDir, @"hello.exe");
Then I ran cspack.exe to create the .csx directory. The resulting .csx package has hello.exe in the correct location:
WorkerRole1.csx\roles\WorkerRole1\approot\bin2\hello.exe
Then I started the local development fabric with csrun.exe and got an error from the parent process saying that bin2/hello.exe is missing.
Do I need to do something else to make csrun copy hello.exe into "bin2"?
Any ideas?
Thank you in advance,
Ivgard
I'm pretty sure I've answered this question already (probably on the MSDN forum). The local resource you declare will give you a path entirely different from where you're putting your hello.exe. When you add the file to your project, it gets included with the rest of the code for your role. When you look up the local resource, you get a path to an empty directory you can use to write and read data. Those two are completely separate and unrelated locations.
If you want to find your hello.exe that's under bin2, just look for the relative path, or use %RoleRoot%\approot\bin2 (or maybe it's %RoleRoot%\approot\bin\bin2?).
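For example, something along these lines (a sketch, not verified; the RoleRoot environment variable is standard in Azure roles, but double-check the approot layout as noted above):
string roleRoot = Environment.GetEnvironmentVariable("RoleRoot");
string command = Path.Combine(roleRoot + @"\approot\bin2", "hello.exe");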
