We want to migrate our classic depot to a stream depot.
Is it possible? Can anyone please share any docs for that?
I can just import the classic depot project into the stream depot, but it won't keep the history of the files once I move it to the streams depot. Is it possible to migrate it completely? Is there an easy method to do that?
Files can be duplicated between depots (with full history) via the p4 duplicate command. You can take advantage of this to migrate a "classic" branching structure into a stream depot, provided that the files are placed in locations that correspond correctly to the stream path definitions.
Since there is no automated mechanism for translating classic branch paths to stream paths, classic-to-stream migrations are not recommended as typical practice. They are usually carried out under the supervision of a consultant who has the expertise to understand both your existing branching structure and a hypothetical stream-based version of the same thing, and to come up with a mapping for every depot file from one to the other.
The standard recommendation for companies moving to streams is to keep your existing projects in their existing classic depots, and start using streams with a new project so you're building your codeline structure in streams from scratch rather than trying to refactor it after the fact.
I'm struggling to solve a problem with Terraform.
The problem, in short, is related to collaboration between teams writing infra code using the SAME state file. Example:
Infra team: network plumbing
App team: create VMs/k8s workloads
The app team obviously takes some inputs from the infra team using data sources... up to here everything is alright. Both teams use different Git repos but the same remote STATE file.
Because both teams use the SAME state file, resources created by one team (e.g., the infra team) are marked for DELETION when the app team tries to deploy something, and vice versa.
Can we prevent this? Is using separate state files the only solution?
Thank you.
The short answer is Yes.
Separate state files are the best long-term solution. The other option you could think about is workspaces, but according to the Terraform documentation that's not a good idea:
When Terraform is used to manage larger systems, teams should use multiple separate Terraform configurations that correspond with suitable architectural boundaries within the system so that different components can be managed separately and, if appropriate, by distinct teams. Workspaces alone are not a suitable tool for system decomposition, because each subsystem should have its own separate configuration and backend, and will thus have its own distinct set of workspaces.
https://www.terraform.io/docs/state/workspaces.html#when-to-use-multiple-workspaces
Not sure there is going to be a right or wrong answer for this one, but I am just interested in how people manage Terraform in the real world, in terms of whether you use modules, different environments, and collaboration.
At the moment we are planning on having production, dev, and test environments, all similar.
I have written my Terraform files so that each defines an individual AWS component, say one each for VPC, IAM, EC2, monitoring (CloudWatch + CloudTrail + CloudConfig), etc. There is one variables file and one .tfvars file for the above, so the files are portable (all environments will be the same), and if you need to change something it's all in one place. It also means that if we have a specific project running, I can create a .tf file defining all the resources for the project and drop it in, then remove it once the project is completed.
Each environment has its own folder structure on our Terraform server.
Is this too simplistic? I keep looking at modules.
Also, does anyone have experience of collaboration with Terraform across different teams? I have been looking at things like Atlantis to tie into GitHub so that any changes need to be approved, and at the same time, with the correct IAM role, I can limit what Terraform can change.
Like I said, there may not be a right or wrong answer; I'm just interested in how people are managing Terraform and their experiences.
Thanks
My answer is just a use case...
We are using Terraform for an application deployed for several customers, each with small customer-specific configuration features.
We have only one CVS repository, and we don't use the CVS branching mechanism.
For each folder, we have remote state, at minimum to share state between developers.
We use one global folder with remote state, also used to share state between customer configurations.
We use one folder per customer, with workspaces (formerly "environments") for each context of each customer (prod: blue/green, stage).
For common infrastructure chunks shared by all customers, we use modules.
We mainly use variables to reduce the number of customer-specific files in each customer folder.
Hope this will help you...
In a microservice architecture, is it advisable to have a centralized collection of proto files and have them as a dependency for clients and servers, or to have one proto file per client and server?
If your organization uses a monolithic code base (i.e., all code is stored in one repository), I would strongly recommend using the same file. The only alternative is to copy the file, but then you have to keep all the copies in sync.
If you share the protocol buffer file between the sender and the receiver, you can statically check that both use the same schema, especially if some new microservices will be written in a statically typed language (e.g., Java).
On the other hand, if you do not have a monolithic code base but instead have multiple repositories (e.g., one per microservice), then it is more cumbersome to share the protocol buffer files. What you can do is put them in separate repositories that can be added as a dependency to the microservices that need them. That is what I have seen at my previous company: we had multiple small API repositories for the schemas.
So, if it is easy to use the same file, I would recommend doing so instead of creating copies. There may be situations, however, where copying is more practical. The disadvantage is that you always have to apply every change to all copies. In the best case, you know which files to update, and it is merely tedious. In the worst case, you do not know which files to update, and your schemas get out of sync; you only find out when the code is released.
Note that a monolithic code base does not imply a monolithic architecture. You can have microservices and still keep all the source code together in one repository. The famous example is, of course, Google. Google also heavily uses protocol buffers for its internal communication. I have not seen their source code, but I would be surprised if they did not share their protocol buffer files between services.
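For illustration only, here is a rough TypeScript sketch of the shared-schema approach, assuming the central proto repository is published as a versioned package (the package name @acme/protos and the OrderCreated message are hypothetical) and compiled with a generator such as ts-proto. Because producer and consumer import the same generated types, a schema mismatch shows up at compile time rather than in production:

```typescript
// Hypothetical import: types and codecs generated from the shared repo's
// orders.proto and published as a versioned package.
import { OrderCreated } from "@acme/protos/orders";

// Producer side: build and serialize the event with the shared codec.
export function encodeOrderCreated(orderId: string): Uint8Array {
  const event: OrderCreated = { orderId };
  return OrderCreated.encode(event).finish();
}

// Consumer side: decode with the very same generated code, so a field
// rename or type change in orders.proto breaks both sides at compile time.
export function decodeOrderCreated(bytes: Uint8Array): OrderCreated {
  return OrderCreated.decode(bytes);
}
```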
I am currently considering whether I should be storing media in an Apache Cassandra database. The use case is that the site will take uploads from users for insurance claims and will need to store the files so that they cannot be accessed outside the correct permissions, while still being streamable. If I store them on a file system, I have to deal with redundancy, backups, and so on using file-system-based old tech.
I am not really interested in dealing with a CDN, partly because many of them are expensive, but also because whether you can view the content depends on information in the app, such as which adjuster is assigned to the case. In addition, I want to stream the files rather than require a download-then-view flow, which would be the default mode with requests against a CDN. If I put them in Cassandra, it will handle the replication and storage, and I can stream the binary data out of the database to the user with integrated permissions. What I am concerned about is whether I will run into problems with Cassandra rows holding huge HD video files that are sometimes 1 to 2 hours long (testimony).
I am interested in the recommendations of Cassandra users concerning this issue. How would you solve the problem? Any lessons you have learned that I can benefit from? Would you suggest anything specific about the video tables if I go with Cassandra storage? Is there any CDN that will stream rather than require download, allow me to plug in permissions, and at the same time be open source?
Thanks a bunch.
Cassandra is definitely not designed to be, and should not be used as, an object store. I've worked on plenty of use cases where Cassandra was used as the metadata store alongside an object store/CDN, and it can complement them quite nicely.
Check out KillrVideo for inspiration: https://killrvideo.github.io/
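To make the metadata-plus-object-store split concrete, here is a minimal Node.js sketch (TypeScript, using the DataStax cassandra-driver). The keyspace, table, and column names are made up for illustration; the video bytes themselves are assumed to live in whatever object store or file service you choose, with Cassandra holding only the pointer and the permission-relevant metadata:

```typescript
import { Client } from "cassandra-driver";

// Cassandra stores metadata and a pointer; the video itself lives in an
// object store (S3, MinIO, a NAS-backed service, ...), not in this table.
const client = new Client({
  contactPoints: ["10.0.0.1"],   // placeholder node address
  localDataCenter: "datacenter1",
  keyspace: "claims",            // hypothetical keyspace
});

// Assumed schema, created separately:
//   CREATE TABLE claims.media (
//     claim_id     uuid,
//     media_id     timeuuid,
//     file_name    text,
//     content_type text,
//     size_bytes   bigint,
//     adjuster_id  text,     -- used by the app to enforce permissions
//     storage_url  text,     -- where the bytes actually live
//     PRIMARY KEY (claim_id, media_id)
//   );
export async function recordUpload(
  claimId: string,
  fileName: string,
  contentType: string,
  sizeBytes: number,
  adjusterId: string,
  storageUrl: string,
): Promise<void> {
  const query =
    "INSERT INTO media (claim_id, media_id, file_name, content_type, " +
    "size_bytes, adjuster_id, storage_url) VALUES (?, now(), ?, ?, ?, ?, ?)";
  await client.execute(
    query,
    [claimId, fileName, contentType, sizeBytes, adjusterId, storageUrl],
    { prepare: true },
  );
}
```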
This seems like a good key-value use case for the Streaming LOB support in Oracle NoSQL Database. You might want to look at this: http://docs.oracle.com/cd/NOSQL/html/GettingStartedGuide/lobapi.html
I'm new to Node, and was hoping to use Node.js for a small internal webapp for managing workflow for product photos.
The image files are RAW camera files, and stored on a local NAS device.
The webapp should:
Have a concept of workflow, and be able to jump back/forth between states, and handle error states.
Watch certain directories for image files, and react to new files being added, or existing files being moved/removed.
Send out emails in response to events.
Scan photos for QR barcodes, and generate events based on these.
Rename photos based on user-defined batch patterns in response to events.
Questions:
Is Node.js a suitable tool for something like this? Why or why not?
Any libraries to help manage the workflow? I could only find node-workflow (http://kusor.github.io/node-workflow/) - curious about anybody's experiences with this. Alternatives?
Likewise for file watching: I saw many wrappers for fs.watch (e.g. https://github.com/mikeal/watch), as well as some alternatives (e.g. https://github.com/paulmillr/chokidar) - any advice for somebody new to Node?
Apart from using a NAS with a network filesystem, are there any alternative stores I can use for the image files?
I'm open to other alternatives here. I'm worried that the system will get confused or lose track of files.
The Node.js docs also mention that watching files on network file systems might be unreliable (http://nodejs.org/api/fs.html#fs_fs_watch_filename_options_listener). Are there more robust solutions?
Any other tips/suggestions for this project?
Cheers,
Victor
Node.js is a fine platform for building an application like the one you describe. The key question is file storage. You might find this post very interesting:
Storing Images in DB - Yea or Nay?
This other post enumerates a few interesting options for writing workflows:
Workflow engine in Javascript
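If those libraries feel heavier than you need, the "jump back/forth between states" requirement can also be covered by a small hand-rolled state machine. A minimal sketch, with made-up states for the photo pipeline:

```typescript
// Hypothetical workflow states for a photo; adjust to your real pipeline.
type PhotoState = "uploaded" | "scanned" | "renamed" | "notified" | "error";

// Allowed transitions, including jumping back to retry after an error.
const transitions: Record<PhotoState, PhotoState[]> = {
  uploaded: ["scanned", "error"],
  scanned: ["renamed", "error"],
  renamed: ["notified", "error"],
  notified: [],
  error: ["uploaded", "scanned", "renamed"],
};

export function advance(current: PhotoState, next: PhotoState): PhotoState {
  if (!transitions[current].includes(next)) {
    throw new Error(`Illegal transition: ${current} -> ${next}`);
  }
  return next;
}
```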
Shameless plug: I have been working on an event coordination library that you might find useful and interesting.
http://durablejs.org
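On the fs.watch reliability question: chokidar can fall back to polling, which is the usual workaround for watching network mounts. A minimal sketch (the NAS path and tuning values are placeholders):

```typescript
import chokidar from "chokidar";

// Polling is slower but behaves predictably on NFS/SMB mounts where native
// fs.watch events may never fire.
const watcher = chokidar.watch("/mnt/nas/photos", { // placeholder path
  usePolling: true,
  interval: 5000,          // poll every 5 s; tune for NAS load
  awaitWriteFinish: true,  // don't report large RAW files until fully copied
  ignoreInitial: true,     // skip events for files already present at startup
});

watcher
  .on("add", (path) => console.log(`new file: ${path}`))
  .on("unlink", (path) => console.log(`file removed: ${path}`))
  .on("change", (path) => console.log(`file changed: ${path}`));
```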