I am new to Hybris 6, and I would like to create a simple eCommerce store where I can select products and check out as normal, using the basic recipe. I would like to know how to do that from end to end. I know the theory and I have the material; the only thing I am interested in is the sequence of steps I must follow to create a fully functional simple store. Currently I can do the data modelling and classifications, import data using ImpEx, and view and customize the available electronics and apparel stores. The problem is that I want to create my own store from scratch.
Use the modulegen ant target; it generates everything you need for your future store implementation:

ant modulegen

When prompted, enter the module template, a name for your module, and the root package, for example:

accelerator
myshop
com.mycompany.myshop
I'm new to NLP. I am looking for recommendations for an annotation tool to create a labeled NER dataset from raw texts.
In detail:
I'm trying to create a labeled dataset for specific types of entities in order to develop my own NER project (rule-based at first).
I assumed there would be some friendly frameworks that allow creating tagging projects, tagging text data, building a labeled dataset, and even sharing projects so several people could work on the same project, but I'm struggling to find one (I admit "friendly" and "intuitive" are subjective, yet this is my experience).
So far I've tried several frameworks:
I tried LightTag. It makes the tagging itself fast and easy (i.e. marking the words and giving them labels), but the entire process of creating a useful dataset is not as intuitive as I expected (i.e. uploading the text files, splitting them into different tagging objects, saving the tags, etc.).
I've installed and tried LabelStudio and found it less mature than LightTag (I don't mean to judge here :)).
I've also read about spaCy's Prodigy, which is a paid annotation tool. I would consider purchasing it, but their website only offers a live demo of the tagging phase, and I can't assess whether their product is superior to the other two products above.
Even on Stack Overflow, the latest question I found on this matter is over 5 years old.
Do you have any recommendation for a tool to create a labeled NER dataset from raw text?
⚠️ Disclaimer
I am the author of Acharya. I will limit my answer to the points raised in the question.
Based on your question, Acharya would help you create a project, upload your raw text data, and annotate it to create a labeled dataset.
It would allow you to mark records individually for train or test in the dataset and would give data-centric reports to identify and fix annotation/labeling errors.
It allows you to add different algorithms (bring your own algorithm) to the project and train the model regularly. Once trained, it can give annotation suggestions from the trained models on untagged data to make the labeling process faster.
If you want to train in a different setup, it allows you to export the labeled dataset in multiple supported formats.
Currently, it does not support sharing of projects.
Acharya community edition is in alpha release.
GitHub page: https://github.com/astutic/Acharya
Website: https://acharya.astutic.com/
Doccano is another open-source annotation tool that you can check out https://github.com/doccano/doccano
I have used both DOCCANO (https://github.com/doccano/doccano) and BRAT (https://brat.nlplab.org/).
I find the latter very good, and it supports more functions. Both are free to use.
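Whichever tool you end up with, the deliverable is usually a file of character-offset annotations that you can load programmatically. As a rough illustration only, here is a minimal Python sketch for reading a JSON-lines export into (text, spans) pairs; the field names "text" and "labels" are placeholders I made up, so check the actual export format of whichever tool you choose:

    # Minimal sketch: load a JSON-lines annotation export into (text, spans) pairs.
    # The field names ("text", "labels") are illustrative placeholders; each
    # annotation tool has its own export schema.
    import json

    def load_annotations(path):
        records = []
        with open(path, encoding="utf-8") as f:
            for line in f:
                item = json.loads(line)
                text = item["text"]
                # each span is assumed to be [start_char, end_char, label]
                spans = [(start, end, label) for start, end, label in item.get("labels", [])]
                records.append((text, spans))
        return records

    # Example usage: records = load_annotations("export.jsonl")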
I'm new to Azure and am trying to train a translator model. When creating a project, you are asked to choose a source language and a target language. In the list, it can be seen that L1->L2 is available, but also L2->L1. This raises my question: if I want a model that can translate from one language to the other interchangeably (L1<->L2), do I need to train two models, one L1->L2 and the other L2->L1? Training is quite expensive, and having to do it twice seems impractical.
Each Custom Translator project represents a translation system in one direction. You may choose to build one direction or both directions. You can still translate in both directions if you have built the system in only one direction: Translator will use your custom system in the direction you have built, and the generic system in the other.
All projects in your workspace access the same data store, so you can use the same set of documents to build a translation system in either direction.
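For example, here is a minimal Python sketch (using the requests library) of calling the Translator v3 REST API with the same Custom Translator category in both directions; the key, region, and category ID are placeholders, and per the above, the direction you did not build falls back to the generic system:

    # Minimal sketch with placeholder credentials: call the Translator v3 REST API
    # with the same Custom Translator category in both directions. The direction
    # you trained uses your custom system; the other falls back to the generic one.
    import requests

    ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"
    KEY = "<your-translator-key>"        # placeholder
    REGION = "<your-resource-region>"    # placeholder, e.g. "westeurope"
    CATEGORY = "<your-category-id>"      # placeholder Custom Translator category ID

    def translate(text, source, target):
        params = {
            "api-version": "3.0",
            "from": source,
            "to": target,
            "category": CATEGORY,       # routes to your custom system when one exists
            "allowFallback": "true",    # otherwise allow the generic system
        }
        headers = {
            "Ocp-Apim-Subscription-Key": KEY,
            "Ocp-Apim-Subscription-Region": REGION,
            "Content-Type": "application/json",
        }
        response = requests.post(ENDPOINT, params=params, headers=headers,
                                 json=[{"Text": text}])
        response.raise_for_status()
        return response.json()[0]["translations"][0]["text"]

    print(translate("Hello world", "en", "fr"))       # custom system if you built en->fr
    print(translate("Bonjour le monde", "fr", "en"))  # generic system in the other direction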
I have a data set of about 1 million employer names. These names are from a free-form text field so they include misspellings and variations in the way they are inputted (e.g. "Amazon" .. "Amzaon" .. "Amazon.com" .. "Amazon Web Services" .. "AWS").
I want to either A) group these 1 million so I have a somewhat accurate sense of how many unique employers are in the data set or B) be able to find all variations of any given employer.
So far, I've been using the data in Tableau, then filtering on "employer name" and searching all variations of the name I can think of. But it's tedious and I'm pretty sure I'm leaving many out.
I've also used the fuzzy add-in for Excel, but it hasn't worked that well on misspellings, special characters, etc.
Tableau just isn't suited to doing this kind of analysis straight out of the box, and I would highly recommend doing some pre-processing on your data before trying to build a workbook around it.
Like another commenter said, you could look into using Tableau Prep Builder for a one-time transformation on your data set, but if you wanted to automate this process it costs extra to add functionality to whatever Tableau Server installation you have.
If you're familiar with Python or R (and the integration between Tableau Server and those services is supported by your organization), you could look into building a script to run the transformation in real time, but it probably won't be very efficient.
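As a rough sketch of that scripting idea, here is some Python for grouping near-duplicate employer names with fuzzy matching. It assumes the third-party rapidfuzz package; the normalization rules and similarity threshold are illustrative and would need tuning, and pure abbreviations like "AWS" would still need a manual alias list:

    # Rough sketch: group near-duplicate employer names with fuzzy matching
    # before loading the data into Tableau. Assumes `pip install rapidfuzz`.
    # Threshold and cleanup rules are illustrative only.
    import re
    from rapidfuzz import fuzz

    def normalize(name):
        """Lowercase, strip punctuation and common corporate suffixes."""
        name = name.lower()
        name = re.sub(r"[^\w\s]", " ", name)
        name = re.sub(r"\b(inc|llc|ltd|corp|co)\b", "", name)
        return " ".join(name.split())

    def group_names(names, threshold=90):
        """Greedy single pass: each name joins the first group whose
        representative is similar enough, otherwise it starts a new group.
        For ~1M names you would also want blocking (e.g. by first letter)."""
        groups = []  # list of (representative, [original names])
        for raw in names:
            cleaned = normalize(raw)
            for rep, members in groups:
                if fuzz.token_sort_ratio(cleaned, rep) >= threshold:
                    members.append(raw)
                    break
            else:
                groups.append((cleaned, [raw]))
        return groups

    sample = ["Amazon", "Amzaon", "Amazon.com", "Amazon Web Services", "AWS"]
    for rep, members in group_names(sample):
        print(rep, "->", members)
    # Note: "AWS" will not fuzzy-match "Amazon"; abbreviations need a manual alias map.

Whether "Amazon Web Services" ends up in the same group as "Amazon" depends on the scorer and threshold you pick, which is exactly the kind of tuning this approach requires.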
Try experimenting with Tableau Prep Builder - the companion tool that comes with your Tableau Creator license. It has a group feature that is designed for just these problems.
In Prep Builder, you’ll just need to connect to your data, add a cleaning step, and then add a group to your cleaning step.
I have just started on a project which is regulatory in nature, and the business area of the IB I work with uses ActivePivot to manage their securities (inventory).
One of the tasks we need to do is take the ActivePivot data set and run some sort of simple rules engine over the data that feeds ActivePivot. There is a little bit of netting involved at the transactional level, but it's mostly simple rules using basic operators. I haven't used ActivePivot before, but the users are telling me it doesn't really allow them to add fields within the cube, which I understand from a technical perspective. I also noted that ActiveViam has a product called ActiveUI, which on the surface appears to do this?
Does anyone have any tips/advice on what worked for them? The business also wants a better data visualisation tool (graphs and the like). I was looking at Tableau but am open to suggestions. Many thanks for any help given.
There is no clear question here, so I will answer your different points one by one:
run some sort of simple rules engine over the data that feeds ActivePivot
You can apply your rules engine to the data set in your project before it feeds ActivePivot, as if you were not using ActivePivot afterwards (a toy sketch of this idea follows at the end of this answer).
users are telling me it doesn't really allow them to add fields within the cube
You cannot add fields once the cube has started, but you can update the description of your cube in your project to integrate the new fields introduced by your new logic.
I also noted that ActiveViam have a product called ActiveUI which on the surface appears to do this?
ActiveUI is a UI for the ActiveViam products (including ActivePivot), so it provides you with, among other things, tables and charts to navigate your data.
The business also want a better data visualisation tool (graphs and the likes).. I was looking at tableaux but open to suggestions
ActiveUI can provide this. ActivePivot follows the standard for OLAP databases (XMLA), so it is also compatible with other XMLA clients such as Excel and Tableau. Your BI team has probably already chosen which client they would use, so you should check with them.
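Here is the toy sketch mentioned above, illustrating the pre-processing idea: netting at the transactional level and a few simple rules applied to the source data before it is fed to ActivePivot. The field names, rules, and thresholds are invented for illustration, and a real feed would more likely live in the Java project around ActivePivot:

    # Toy sketch only: net transactions and apply simple rules to the source data
    # before feeding ActivePivot. Field names, rules, and thresholds are invented.
    from collections import defaultdict

    transactions = [
        {"security": "XS123", "book": "A", "quantity": 100},
        {"security": "XS123", "book": "A", "quantity": -40},
        {"security": "US456", "book": "B", "quantity": 25},
    ]

    # 1. Netting at the transactional level: sum quantities per (security, book).
    netted = defaultdict(float)
    for t in transactions:
        netted[(t["security"], t["book"])] += t["quantity"]

    # 2. Simple rules with basic operators, producing the extra fields the
    #    business wants to see in the cube.
    records = []
    for (security, book), qty in netted.items():
        records.append({
            "security": security,
            "book": book,
            "netQuantity": qty,
            "isLong": qty > 0,                  # example rule
            "isLargePosition": abs(qty) >= 50,  # example threshold rule
        })

    # `records` (rather than the raw transactions) would then be fed to the cube,
    # with the new fields added to the cube description as noted above.
    print(records)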
I need certain custom entity fields to calculate and display values based on operations on the data in the system.
For example, consider a booking system implementation with contacts and a custom entity, tickets. There is a one-to-many relationship between contact and tickets. I would like to create fields that calculate and display the following on the contact form:
frequent flyer: more than 10 tickets bought;
a field that displays yes or no based on whether a first-class ticket has ever been purchased (the ticket ref would start with, say, FCxxx).
If this isn't possible, perhaps someone could suggest an alternative method for displaying this info?
This is possible, and you have a couple of ways to do it: a workflow or a plug-in.
If you have a lot of calculations, I think the best way is a plug-in. You can register it on the post-create event of the tickets entity, and there you can perform all these calculations and update the custom fields of the contact entity.
You can check some tutorials about developing a plug-in:
http://mscrmshop.blogspot.pt/2010/10/crm-2011-plugin-tutorial.html
http://msdn.microsoft.com/en-us/library/gg695782.aspx
http://crmconsultancy.wordpress.com/2010/10/25/plugins-in-crm-2011/
Specific information about registering a plug-in:
http://msdn.microsoft.com/en-us/library/hh237515.aspx
In the SDK you can find more examples.
As far as I'm aware, it's not possible to achieve this without coding. So, if you're looking for a way to customize it by point-and-click, you might just be out of luck.
If you wish to display that information upon retrieval of a customer, it's probably fastest to get it using JavaScript. You can add a custom script to the form's onload event. However, that means you'll have to write JavaScript, so if you're not into coding you'll have problems.
If you do know how to code, creating a plugin with C# is probably the preferred way (that's what I'd do, at least). The advantage of that lies in extensibility, should you realize that you wish to perform more operations.
Also, if you wish to store the computed values, you'll have to go with a plugin. Otherwise, only GUI operations will perform the computations. If a program enters/retrieves data in the background, you can't rely on the values being computed unless you listen to messages such as Retrieve, Create, etc.