List all viewflow processes the user is allowed to - django-viewflow

I'm trying to implement django-viewflow in my project, using django-admin as a GUI.
I'm currently trying to create a custom view and corresponding template that shows a user a list of all the processes he can start, so not the process instances but a list of the process models he's allowed to see.
Is this possible? I tried using ProcessListView, but it requires a flow_class, while I'd like to list all the flows the user is allowed to access.

You can get available process instances with Process.objects.filter_available([flow_class1, flow_class2, ...], user)

Related

How to mock model objects in SAP Hybris?

While writing integration tests in SAP Hybris, I am getting exceptions which imply that the model objects are not available during the test case.
It seems that the ImpEx scripts which normally run during initialization are not running here. Creating objects with the model service is becoming tedious.
Is there some other way? What about the custom objects defined by me in the product (like ABCProduct extending Product), and the values for them? Is it possible to mock them as well? What about BaseSite and PriceRow?
There are some things you need to know about the test system.
Tenants
Usually you work with the master tenant. The test system, however, has its own tenant called junit. A tenant is essentially a separate data set for a Hybris server running the same code. That way you can run different shops on the same infrastructure, but every shop can only access the data that is meant for its tenant. How does it work? Every tenant has a table prefix; only the master tenant has an empty one. So the products table for the master tenant is called "products", but the products table for the junit tenant is called "junit_products".
Further reading: https://help.sap.com/viewer/d0224eca81e249cb821f2cdf45a82ace/1905/en-US/8c14e7ae866910148e59ebf4a2685857.html
Initialization
When you run initialization using ant initialize or the admin console, you usually only initialize the master tenant. When you want to initialize the junit tenant, you need to either switch to the junit tenant in the admin console or run ant initialize -Dtenant=junit. However, this creates only the most basic data.
More on how to execute initialization in admin console in section "Executing Tests": https://help.sap.com/viewer/d0224eca81e249cb821f2cdf45a82ace/1905/en-US/aae25ecb74ab4bd69cc5270ffd455459.html
Creating test data
There are a few classes you can inherit from, to create an integration test, but only ServicelayerTest provides methods to create sample data. All those methods import impex files located in /hybris/bin/platform/ext/core/resources/servicelayer/test/
createCoreData() Creates languages, currencies, units etc. See: testBasics.csv
createDefaultCatalog() Creates a sample product catalog with an online catalog version and basic sample products. See: testCatalog.csv
createHardwareCatalog() Creates a sample product catalog with staged and online version, products and classifications. See testHwcatalog.csv and testClassification.csv
createDefaultUsers() Creates sample customers with addresses etc. See testUser.csv
Importing custom data
To import data not covered by the ServicelayerTest methods, I recommend one of two approaches.
Using ModelService and other services to create your data. E.g. you can use the OrderService to create sample orders. You can also create utility classes that provide sample data. You can wire every service you need by annotating it with the @Resource annotation.
Using impex files to create all the data you need. You can split these up into different files that serve different needs (e.g. customers, orders, products...). The method importCsv(String pathToFile, String encoding) in ServicelayerTest lets you import them.

Should servers with similar roles have one role with different profiles, or different roles per server?

I'm struggling to get my head around Puppet, and to make matters worse I'm using Red Hat Satellite 6 which adds additional layers of complexity.
I'm currently building a graphite graphing solution. There are three types of server (relay - receives the data, cache - stores the data, graph - runs Grafana and talks to the caches).
I have two colleagues telling me to do it two different ways. The first is to create a 'role_graphing' class with 'sub-roles' such as role_graphing::relay, and so on. The second is to have one role per server.
I've currently gone with the first method, and my init.pp looks like:
class role_graphing {
  include profile::graphing_base
}

class role_graphing::relay inherits role_graphing {
  include profile::carbon_c_relay
}

class role_graphing::cache inherits role_graphing {
  include profile::carbon_cache
  include profile::carbon_c_relay
  include profile::graphite_web
  include profile::memcached
}

class role_graphing::graph inherits role_graphing {
  include profile::graph
}
And then in my manifests folder, I have a profile_relay.pp, profile_cache.pp and so on. Each profile simply installs the required packages from Yum or the Forge, and then configures them.
Am I going about this the 'right' way?
Instead of advising you directly, I will describe Puppet Labs' intent for these terms, followed by an example, so you can see the general picture.
Modules are collections of manifests, functions, files, templates, etc.
Profiles are collections of one or more modules.
Roles are collections of one or more profiles.
Servers are collections of one or more roles.
Example:
localhost.localdomain contains role application_server
role application_server contains profiles lamp and web_apps
profile lamp contains apache, mysql, and php modules
profile web_apps contains app_one and app_two modules
apache module contains https://forge.puppet.com/puppetlabs/apache
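Under those definitions, the classification might be sketched as follows (the node name and class names below are illustrative, mirroring the example above, not a prescribed layout):

```puppet
# Hypothetical site.pp sketch: each node gets exactly one role,
# the role pulls in profiles, and profiles pull in modules.
node 'localhost.localdomain' {
  include role::application_server
}

class role::application_server {
  include profile::lamp
  include profile::web_apps
}

class profile::lamp {
  include apache   # Forge module puppetlabs/apache
  include mysql
  include php
}
```

The point of the pattern is that a node's entire configuration is expressed by assigning it a single role, and all reuse happens at the profile and module layers.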

Changing operation flow in nodejs

I am trying to create an ecommerce website in Node.js. I want it to be modular, so that we can add extensions later without editing the main codebase. For example, suppose I have an extension which checks whether a user is a requester or an approver: if he is an approver he can check out, otherwise an approval request is sent to the corresponding approver. Suppose I emit an event when a checkout is made; the extension can catch and process it. But at the same time I want the normal flow to be changed. How can I do that? Should I create a checkout module extending the original checkout module, override its functions, and make sure the extension's module is loaded? If I do that, there will be a problem if two different extensions add features to the same core module. What is the best way to do it?
Generally speaking, there are two ways widely used to extend a web app :
Webhooks
Api
Both have their pros and cons.
What you are trying to do is possible in the hook style, because the code will be executed on the server itself and you can extend objects and modify their behavior as you want.
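A minimal sketch of that hook style, with hypothetical names (registerHook, runHooks, checkout): the core module consults registered hooks before committing, and an extension can divert the flow without the core ever importing it.

```javascript
// Hook registry: extensions register handlers that can veto or
// alter the normal flow by mutating the shared context object.
const hooks = {};

function registerHook(name, fn) {
  (hooks[name] = hooks[name] || []).push(fn);
}

async function runHooks(name, context) {
  for (const fn of hooks[name] || []) {
    await fn(context); // each hook may mutate context, e.g. set context.abort
  }
  return context;
}

// Core checkout: runs the 'beforeCheckout' hooks, then decides the outcome.
async function checkout(order) {
  const ctx = await runHooks('beforeCheckout', { order, abort: false });
  if (ctx.abort) return { status: 'pending-approval' };
  return { status: 'completed' };
}

// An approval extension plugs in without touching the core module.
registerHook('beforeCheckout', async (ctx) => {
  if (ctx.order.user.role !== 'approver') {
    ctx.abort = true; // divert the flow to an approval request
  }
});
```

Because hooks run in registration order over a shared context, two extensions touching the same core module compose instead of overriding each other, which avoids the subclassing conflict described above.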

Liferay model listener ordering

Following is my requirement :
Whenever site is created, with help of GroupListener we are adding some custom attributes to created site.
So assume you are creating a "Liferay Test" site: it will have some fixed custom attribute "XYZ" with its value set in GroupListener's onAfterCreate method.
I can see this value in custom fields under site settings.
Now based on this values, we are creating groups in another system(out of liferay, using webservices).
So far so good.
Whenever we are deleting the site we need to remove the equivalent groups from other system via web service.
But while deleting site, in GroupListener we are not able to retrieve the custom attributes.
On further debug by adding expando listener, I observed that Expando listeners are getting called first and then delete method of GroupLocalService/GroupListener.
And hence we are not able to delete the groups present in another system.
So I was wondering if we can have an ordering defined for listeners.
Note: Since we were not getting custom attributes in the listeners, we implemented GroupLocalServiceImpl; with this we get the custom attributes in the delete method on the local environment, but not on our stage environment, which has clustering.
You shouldn't use the ModelListeners for this kind of change, rather create ServiceWrappers, e.g. wrap the interesting methods in GroupLocalService (for creation as well as deletion).
This will also enable you to react to failures to create records in your external system etc.

Get 2 userscripts to interact with each other?

I have two scripts. I put them in the same namespace (the #namespace field).
I'd like them to interact with one another.
Specifically, I want script A to set RunByDefault to 123, have script B check whether RunByDefault==123, and then have script A use a timeout or similar to call a function in script B.
How do I do this? I'd hate to merge the scripts.
The scripts cannot directly interact with each other and // #namespace is just to resolve script name conflicts. (That is, you can have 2 different scripts named "Link Remover", only if they have different namespaces.)
Separate scripts can swap information using:
Cookies -- works same-domain only
localStorage -- works same-domain only
Sending and receiving values via AJAX to a server that you control -- works cross-domain.
That's it.
Different running instances, of the same script, can swap information using GM_setValue() and GM_getValue(). This technique has the advantage of being cross-domain, easy, and invisible to the target web page(s).
See this working example of cross-tab communication in Tampermonkey.
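The GM_setValue / GM_getValue handshake looks roughly like this. GM_setValue and GM_getValue are provided by Greasemonkey/Tampermonkey; here they are stubbed with a plain object so the pattern can run standalone, and RunByDefault is the flag name from the question.

```javascript
// Stand-ins for the userscript storage API (normally supplied by the
// script manager; stubbed here with an in-memory object).
const store = {};
function GM_setValue(key, value) { store[key] = value; }
function GM_getValue(key, def) { return key in store ? store[key] : def; }

// Instance A publishes the flag...
GM_setValue('RunByDefault', 123);

// ...and instance B polls for it. A real script would wrap this in
// setInterval and invoke its shared function when the flag appears.
function checkFlag() {
  if (GM_getValue('RunByDefault', 0) === 123) {
    return 'run'; // flag seen: trigger the script-B function here
  }
  return 'wait';
}
```

In a real script manager the two instances share the storage backing GM_setValue/GM_getValue, which is what makes this cross-tab and cross-domain.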
On Chrome, and only Chrome, you might be able to use the non-standard FileSystem API to store data on a local file. But this would probably require the user to click for every transaction -- if it worked at all.
Another option is to write an extension (add-on) to act as a helper and do the file IO. You would interact with it via postMessage, usually.
In practice, I've never encountered a situation where it wasn't easier and cleaner to just merge any scripts that really need to share data.
Also, scripts cannot share code, but they can inject JS into the target page and both access that.
Finally, AFAICT, scripts always run sequentially, not in parallel. But you can control the execution order from the Manage User Scripts panel.