I'm currently working with the Perforce API for .NET. I'm trying to understand the concept of streams and the graphical view of streams (the stream graph).
"Merging down" changes from a source stream and "copying up" changes to a stream are the two terms I came across. I'm not really clear on the difference, even though the stream graph is generated based on exactly this relationship.
This is the page I'm looking at now: P4V streams.
Can someone please tell me what it means?
Merging involves blending files together.
Changes that conflict will need to be resolved.
Copying involves replacing the files in one location with the files from another, so that the target becomes an exact duplicate of the source.
Perforce Helix uses the 'mainline' model with Streams.
This involves merging down into less stable streams (via the mainline) and copying up into the more stable streams (again via the mainline).
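If it helps to see the round trip concretely, the commands behind those two arrows in the stream graph are p4 merge and p4 copy. Here's a rough sketch using the P4Python scripting API rather than the .NET one (the stream and workspace names are placeholders, not anything from your setup):

```python
from P4 import P4, P4Exception

p4 = P4()
p4.client = "dev-stream-workspace"   # placeholder: a workspace bound to //Streams/dev
p4.connect()

try:
    # "Merge down": pull the latest changes from the stable parent (main)
    # into the less stable child stream. -S uses the stream's view,
    # -r reverses it so the parent is the source and the child is the target.
    p4.run_merge("-S", "//Streams/dev", "-r")
    p4.run_resolve("-am")            # auto-resolve; real conflicts still need manual attention
    p4.run_submit("-d", "Merge down from main")

    # "Copy up": once the child is stable, promote it verbatim to the parent.
    # Run this from a workspace bound to the parent (//Streams/main);
    # without -r, the child stream is the source and its parent is the target.
    p4.client = "main-stream-workspace"
    p4.run_copy("-S", "//Streams/dev")
    p4.run_submit("-d", "Copy up from dev")
except P4Exception as e:
    print(e)
finally:
    p4.disconnect()
```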
You can find more information about this here:
https://www.perforce.com/video-tutorials/mainline-model
Hope this helps,
Jen.
I know this question was originally posted more than 10 years ago, but I want to believe that some progress has been made since then (we have deepfakes nowadays, so there has clearly been progress on the AI side).
I tried some tutorials with Audacity but was highly disappointed with the result (to be fair, the output is not that bad, but it's not good enough for production).
What reputable algorithm could I use to process an MP3 file myself and remove the vocals, including vocal echo, while preserving the drums and centered instruments?
This task is known in the community as "Vocal Source Separation", "Vocal Signal Separation", or "Singing Voice Source Separation"; these are specialized "Music Source Separation" tasks, which are in turn examples of the more general "Source Separation" task.
Here are some papers: Music Source Separation.
One of the most actively developed open-source solutions is Spleeter, which has been used commercially in various audio products. There is an online tool based on it that you can try out at Splitter.ai. The "2 stems" version will give you one track with the vocals and one track with everything else.
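If you'd rather run it locally on your MP3, the Python package exposes a small API. A minimal sketch (the file names here are just placeholders):

```python
from spleeter.separator import Separator

# Load the pretrained 2-stems model: vocals vs. accompaniment.
separator = Separator("spleeter:2stems")

# Writes output/song/vocals.wav and output/song/accompaniment.wav;
# the accompaniment track is the "everything else" you keep.
separator.separate_to_file("song.mp3", "output/")
```

There are also 4-stems and 5-stems models if you want the drums and bass isolated as separate tracks as well.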
I don't have the means to buy Cubase, but my partner uses it a lot. I wanted to simplify his life and provide him with .cpr projects instead of plain WAV files, but no other software can open or save this format.
I looked at a sample .cpr he sent me, and it seems the file does not contain the audio data itself; rather, it contains the project mark-up and effects.
I wanted to know the following things:
Is it legal to try to reverse-engineer .cpr files?
Is it difficult, and has anyone tried?
Does anyone know of other ways to transfer project files between Audacity/Rosegarden and Cubase? The main thing is support for several tracks and their timing within one project, nothing fancy.
.cpr files are a proprietary format. You can have a look at this question.
I suppose it is pretty hard... and I haven't tried!
To my knowledge, there is no way to export/import a project between Cubase and Audacity or Rosegarden. The OMF format, which could be a good candidate, is not supported by Audacity or Rosegarden for now. You can still import/export the audio mix, the separate tracks, and the MIDI files individually. This method is really tedious, but it probably has the advantage of letting you play and edit your projects decades from now, which isn't guaranteed with proprietary project files.
I have a project set up in a Perforce depot, with a single mainline stream.
I have some tools (CI, git p4) that can only sync a whole stream at once.
My project has lots of large files that aren't always needed (source art assets in a video game). I would like those tools to sync only part of my depot.
Without streams, I would be able to set up a workspace that included only the parts of the depot I needed. With streams, though, it looks like this mapping (the 'stream view') is part of the stream itself, and that if another view is wanted, another stream has to be created and manually updated with changes from the mainline - which sounds awfully similar to branching, which isn't what I want to do.
What I'd like to know is whether there's some way of accomplishing this - multiple views onto the same content - with streams.
As Bryan's comment says, what you want is virtual streams.
https://www.perforce.com/blog/120221/new-20121-virtual-streams
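A virtual stream is just a filtered view onto its parent: no branching, no second copy to keep in sync, and your CI / git p4 tooling can sync the virtual stream as if it were a normal one. Roughly, creating one could look like this (a sketch using P4Python; the stream names and the excluded path are placeholders for your own layout):

```python
from P4 import P4

p4 = P4()
p4.connect()

# Fetch a spec template for a new stream, then turn it into a
# virtual child of the mainline that hides the large art assets.
spec = p4.fetch_stream("//Streams/main-no-art")
spec["Type"] = "virtual"
spec["Parent"] = "//Streams/main"
spec["Paths"] = [
    "share ...",               # expose everything from the parent...
    "exclude source_art/...",  # ...except the heavy source art directory
]
p4.save_stream(spec)
p4.disconnect()
```

Workspaces bound to the virtual stream submit directly to the parent stream's files, so there is nothing to merge back.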
I have been building an online text editor, just for the learning experience. I'm curious what the best way is to store multiple versions of a text file that is constantly changing.
I've looked at a variety of options, and I have yet to find one that is both cheap and scalable.
I've looked into Google Cloud Storage and Amazon S3. The only issue is that frequent save requests quickly add up in cost. I'd like files to be saved practically instantly, and also versioned every so often. I've also looked into data deduplication, which looks like a great option, but I have not yet found a way to do it without writing my own software.
Any and all advice would be greatly appreciated. Thanks!
This is a very broad question, but the basic answer is usually some flavor of Operational Transformation. Basically, you don't want to be constantly sending the entire document back and forth between the user(s) and the server, nor do you want to overwrite the whole document repeatedly. Instead, you want to store diffs. You then need to deal with the possibility that multiple users are changing the file simultaneously, possibly in different areas, and handle that effectively.
Wikipedia has some good, formal discussion of the idea: https://en.wikipedia.org/wiki/Operational_transformation
You wouldn't need all of that for a document that will only be edited by one person at a time, but even then, the answer is to think in terms of diffs from previous versions and only occasionally persist whole snapshots.
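As a rough illustration of the "store diffs, snapshot occasionally" idea for the single-editor case (this is not OT, and all the names are made up for the sketch), in Python:

```python
import difflib

def make_delta(old, new):
    """Compact delta: only the changed spans of `new`, plus where they go in `old`."""
    sm = difflib.SequenceMatcher(a=old, b=new)
    return [(i1, i2, new[j1:j2])
            for tag, i1, i2, j1, j2 in sm.get_opcodes()
            if tag != "equal"]

def apply_delta(old, ops):
    out, pos = [], 0
    for i1, i2, text in ops:
        out.append(old[pos:i1])   # unchanged stretch before this edit
        out.append(text)          # replacement text (empty for deletions)
        pos = i2
    out.append(old[pos:])
    return "".join(out)

class VersionedDoc:
    def __init__(self, text="", snapshot_every=20):
        self.snapshot_every = snapshot_every
        self.snapshots = {0: text}   # sparse full copies: version -> text
        self.deltas = []             # one small delta per save
        self.latest = text

    def save(self, new_text):
        self.deltas.append(make_delta(self.latest, new_text))
        self.latest = new_text
        version = len(self.deltas)
        if version % self.snapshot_every == 0:
            self.snapshots[version] = new_text
        return version

    def checkout(self, version):
        # Start from the nearest snapshot at or below the requested version,
        # then replay the deltas that follow it.
        base = max(v for v in self.snapshots if v <= version)
        text = self.snapshots[base]
        for delta in self.deltas[base:version]:
            text = apply_delta(text, delta)
        return text
```

The same shape maps onto cloud storage: write the small deltas on every save (cheap and fast) and a full snapshot only every N versions, so reconstructing any version stays bounded.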
In follow-up to my previous question, I am able to save the back buffer of a Direct3DDevice to a surface.
I already found D3DXSaveSurfaceToFile in the DirectX SDK, and it works fine! However, I want to record the obtained surfaces to an AVI file.
I also found the AVIFile reference, but those functions are obviously not directly compatible with DirectX surfaces.
What would be the best way of approaching this problem? I've seen a number of GDI+/MVC-based solutions that grab HDCs, but those are out of the question. I'm also not sure what kind of data AVIFile expects and how to extract it from the D3DSurface.
Please advise! :)
Edit:
Post-processing is also an option. I can capture the surface data in a number of formats, specified here, into memory with D3DXSaveSurfaceToFileInMemory. Afterwards, I could compress this data and then store it to disk.
How should I be compressing my data? How should I be storing it? Do I store a timestamp along with it? After recording, how should I turn the generated data into an AVI file?
The source code at this link will show you how to do it:
http://gpalem.web.officelive.com/SimulationRecording.html
Edit: Well, you don't have to do things exactly like that linked code. You have a D3DSurface, so you can just lock it, grab the bits, and pass them into CAviFile::AppendFrameUsual... If you want to change its format, use D3DXLoadSurfaceFromSurface. I didn't say the link was a perfect solution, but it DOES show you how to write the frames into an AVI file.
Edit2: Since I didn't answer your edit, I should do so. Firstly, don't bother with compression until you have uncompressed recording working. Compression is a significantly more complicated thing to get right, and you won't be able to get proper compression by simply using the various D3DXSurface copying functions. They don't support the kinds of compression you are after; D3DX is for 3D rendering and NOT for video compression.
For video compression you are best off using DirectShow, as you can simply add any compressors you wish. This will, however, mean you'll need to write a "source filter" that you can build your graph from. DirectShow is not an easy thing to use, but it's very powerful. As far as writing the "source filter" goes, you can check out the "Push Source" example in the Windows SDK. You will need to adapt it to take the data you are retrieving, however.
As an aside, going further on my original edit, you could use that code as-is by intercepting more D3D9 calls. If you hook the SetRenderTarget calls, you can insert whatever render target you like in there and use the previously linked code directly...
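And if you do end up taking the post-processing route from your edit (dumping each captured frame to disk and assembling the video afterwards), the assembly step can live in a small script rather than in your capture code. A rough sketch, assuming the frames were saved as numbered PNGs and that OpenCV is available (file names and frame rate are placeholders):

```python
import glob
import cv2

# Collect the dumped frames in capture order (zero-padded names sort correctly).
frame_paths = sorted(glob.glob("captures/frame_*.png"))

# Size the writer from the first frame; every written frame must match it.
first = cv2.imread(frame_paths[0])
height, width = first.shape[:2]

# MJPG keeps things simple; swap the FourCC for another installed codec if you prefer.
writer = cv2.VideoWriter("capture.avi",
                         cv2.VideoWriter_fourcc(*"MJPG"),
                         30.0,            # playback frame rate
                         (width, height))

for path in frame_paths:
    writer.write(cv2.imread(path))

writer.release()
```

On your timestamp question: AVI assumes a constant frame rate, so if your capture rate isn't steady, record a timestamp alongside each frame and duplicate or drop frames to hit a fixed rate before writing.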