Perforce Linux server in case-insensitive mode

I was setting up a test Perforce server on Linux and found out it's case-sensitive. How can I switch the server to case-insensitive mode? Is this possible? Would the server need to be set up again?

Ideally, you set the case sensitivity when the server is set up for the first time, and don't change it when there's already data in it. Since you're still in the testing stage, I'd assume it's fine to just blow away your test data and start over from scratch.
Converting existing data is possible, with a bunch of caveats that are detailed in this article: https://community.perforce.com/s/article/1200
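If you do start over, the case-handling mode is chosen when the server process is launched. A minimal sketch for a fresh test server (the root path and port are placeholders; -C1 runs the server case-insensitive, -C0 case-sensitive):

# Start a brand-new test server in case-insensitive mode
p4d -r /srv/p4test -p 1666 -C1
# Verify the mode from a client
p4 info | grep -i case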

Related

How to copy local MLflow run to remote tracking server?

I am currently tracking my MLflow runs to a local file path URI. I would also like to set up a remote tracking server to share with my collaborators. One thing I would like to avoid is to log everything to the server, as it might soon be flooded with failed runs.
Ideally, I'd like to keep my local tracker, and then be able to send only the promising runs to the server.
What is the recommended way of copying a run from a local tracker to a remote server?
To publish your trained model to a remote MLflow server you should use the register_model API. For example, if you are using the spaCy flavor of MLflow, you can do the following, where nlp is the trained model:
import mlflow
import mlflow.spacy

mlflow.spacy.log_model(spacy_model=nlp, artifact_path='mlflow_sample')
model_uri = "runs:/{run_id}/{artifact_path}".format(
    run_id=mlflow.active_run().info.run_id, artifact_path='mlflow_sample'
)
mlflow.register_model(model_uri=model_uri, name='mlflow_sample')
Make sure the following environment variables are set. In the example below, S3 storage is used:
SET MLFLOW_TRACKING_URI=https://YOUR-REMOTE-MLFLOW-HOST
SET MLFLOW_S3_BUCKET=s3://YOUR-BUCKET-NAME
SET AWS_ACCESS_KEY_ID=YOUR-ACCESS-KEY
SET AWS_SECRET_ACCESS_KEY=YOUR-SECRET-KEY
I have been interested in a related capability, copying runs from one experiment to another, for a similar reason: one area for arbitrary runs, and another into which the promising runs that we move forward with are moved. Your scenario with a separate tracking server is just the generalization of mine. Either way, there is apparently no built-in MLflow feature for this at the moment. However, the mlflow-export-import Python-based tool looks like it may cover both our use cases; it cites usage on both Databricks and the open-source version of MLflow, and it appears current as of this writing. I have not tried the tool myself yet. If/when I do, I'm happy to post a follow-up here saying whether it worked well for this purpose, and anyone else is welcome to do the same. Thanks and cheers!
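If you'd rather not depend on an external tool, a run can also be copied by hand with the MlflowClient API. Below is a minimal sketch, with placeholder tracking URIs, run ID, and experiment ID; artifacts are left out for brevity (they could be moved with download_artifacts/log_artifacts), and only the latest value of each metric is carried over:

from mlflow.tracking import MlflowClient

local = MlflowClient(tracking_uri="file:./mlruns")
remote = MlflowClient(tracking_uri="https://YOUR-REMOTE-MLFLOW-HOST")

# Read the promising run from the local tracker
src = local.get_run("SOURCE_RUN_ID")

# Re-create it on the remote server
dst = remote.create_run(experiment_id="REMOTE_EXPERIMENT_ID")
for key, value in src.data.params.items():
    remote.log_param(dst.info.run_id, key, value)
for key, value in src.data.metrics.items():  # latest values only
    remote.log_metric(dst.info.run_id, key, value)
for key, value in src.data.tags.items():
    if not key.startswith("mlflow."):  # skip MLflow's own system tags
        remote.set_tag(dst.info.run_id, key, value)
remote.set_terminated(dst.info.run_id)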

P4HOST problems

I am quite new to Perforce and I am facing an issue with the P4HOST value.
Here's the situation: I have one, let's say, classic setup with a connection, a workspace name, etc., and a host set to the local machine name. Everything works perfectly.
I have another connection that is much the same, but the host must not be the local machine name if it is to connect to the correct server. If I set the host to my local machine name, I end up talking to the wrong server. If I set the host in P4V I get this error: "Client can only be used from host." and it breaks everything for this setup.
To fix this I tried to set the host value manually with this command: p4 set P4HOST=myhost. That works, except that I can then no longer access my other repositories because, I think, it's a global value, and since the other configurations do not use a specific host they fail.
Anyway, given my configurations, what can I do? Is it possible to set P4HOST manually for a specific setup without affecting everything else? Is there another way?
Thank you very much!
Edit: I don't know if this is useful, but the classic host I am using looks like myname-PC and the other one, which is failing, looks like apath/toanotherpath.
P4HOST's job is to keep you from using the same workspace from different machines. If you use the same workspace from different machines, you're going to have a bad time. (Why exactly is its own topic -- for purposes of this answer, just take my word for it that you do not want to use one workspace from different client machines. Dead rising from their graves, cats and dogs living together, that kind of thing. Bad time.)
When you create a workspace, its Host: value is set to your current P4HOST value (which defaults to the client machine hostname). If you try to use that workspace with a DIFFERENT host value, it's a strong clue to the server that you're trying to use it from more than one machine (which, as established, is a Bad Time), and so the server gives you that error (to try to stop you before you have a BAD TIME).
So it sounds like this workspace that you're trying to use was created on a different client host machine -- which means that using that workspace is probably going to lead to a bad time. Create a new workspace for the client machine that you're on.
Alternatively (and only if you're really sure it's the right thing to do), you can change the Host in that workspace to match your current machine. Note that if you find yourself having to do this more than once, you're probably in the process of generating a bad time for yourself.
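One more note on the "without affecting everything" part of the question: connection settings, including P4HOST, can be scoped to a directory tree with a P4CONFIG file instead of being set globally. A rough sketch, with placeholder values:

p4 set P4CONFIG=.p4config

Then, in a file named .p4config at the root of the affected workspace:

P4PORT=otherserver:1666
P4CLIENT=other_workspace
P4HOST=apath/toanotherpath

Any p4 command run from inside that tree picks up these values; commands run elsewhere keep using the global settings.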

Running IIS server with Coypu and SpecFlow

I have already spent a lot of time googling for a solution, but I'm helpless!
I've got an MVC application and I'm trying to do "integration testing" of my views using Coypu and SpecFlow. But I don't know how I should manage the IIS server for this. Is there a way to actually run the server (at the first start of the tests) and make the server use a special "test" DB (for example an in-memory RavenDB) that is emptied after each scenario (and filled during the Background)?
Is there a better or simpler way to do this?
I'm fairly new to this too, so take the answers with a pinch of salt, but as no one else has answered...
Is there a way to actually run the server (first start of tests) ...
You could use IIS Express, which can be called via the command line. You can spin up your website before any tests run (which I believe you can do with the [BeforeTestRun] attribute in SpecFlow) with a call via System.Diagnostics.Process.
The actual command line would be something like:
iisexpress.exe /path:c:\iisexpress\<your-site-published-to-filepath> /port:<anyport> /clr:v2.0
... and making the server use a special "test" DB (for example an in-memory RavenDB) emptied after each scenario (and filled during the background).
In order to use a special test DB, I guess it depends on how your data access works. If you can swap in an in-memory DB fairly easily then I guess you could do that. Although my understanding is that integration tests should be as close to the production environment as possible, so if possible use the same DBMS you're using in production.
What I'm doing is just a data restore to my test DB from a known backup of the prod DB each time before the tests run. I can again call this via the command line/Process before my tests run. For my DB it's a fairly small dataset, and I can restore just the tables relevant to my tests, so this overhead isn't too prohibitive for integration tests. (It wouldn't be acceptable for unit tests, however, which is where you would probably have mock repositories or in-memory data.)
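For instance, if the test DB happens to be SQL Server, that pre-test restore can be a single command-line call (the server name, database name, and backup path here are made up):

sqlcmd -S localhost -Q "RESTORE DATABASE TestDb FROM DISK = N'C:\backups\tests.bak' WITH REPLACE"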
Since you're already using SpecFlow take a look at SpecRun (http://www.specrun.com/).
It's a test runner which is designed for SpecFlow tests and adds all sorts of capabilities, from small conveniences like better formatting of the Test names in the Test Explorer to support for running the same SpecFlow test against multiple targets and config file transformations.
With SpecRun you define a "Profile" which will be used to run your tests, not dissimilar to the VS .runsettings file. In there you can specify:
<DeploymentTransformation>
  <Steps>
    <IISExpress webAppFolder="..\..\MyProject.Web" port="5555"/>
  </Steps>
</DeploymentTransformation>
SpecRun will then start up an IISExpress instance running that Website before running your tests. In the same place you can also set up custom Deployment Transformations (using the standard App.Config transformations) to override the connection strings in your app's Web.config so that it points to the in-memory DB.
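For example, a standard XDT transform that points the connection string at the test DB might look like the following (the connection string name and the RavenDB URL are placeholders, and the xdt namespace has to be declared on the transform file's root element):

<connectionStrings>
  <add name="RavenDB"
       connectionString="Url=http://localhost:8080;Database=TestDb"
       xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
</connectionStrings>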
The only problem I've had with SpecRun is that the documentation isn't great, there are lots of video demonstrations but I'd much rather have a few written tutorials. I guess that's what StackOverflow is here for.

Linux: What should I use to run terminal programs based on a calendar system?

Sorry about the really ambiguous question; I really have no idea how to word it, though hopefully I can give you more detail here.
I am developing a project where a user can log into a website and book a server to run a game for a specific amount of time. When the time is up the server stops running and the players on the server are kicked off. The website part is not a problem, I am doing this in PHP and everything works. It has a calendar system to book a server and can generate config files based on what the user wants.
My question is: what should I use to run the specific game server on the Linux box with those config files at the correct time? I have got this working with bash scripts and cron, but it seems very inelegant. It literally uses FTP to connect to the website so it can download all the necessary config files and put them in a folder for that game and time. I was wondering if there was a better way of doing this. Perhaps writing a program in C, but I am not sure how to go about it.
(I am not asking for someone to hold my hand and tell me "write this code here", just some ideas of a better way of approaching this problem)
Thanks so much guys!
Edit: The webserver is a totally different machine. I would theoretically like to have more than one game server, where each of them "connects" (at the moment via FTP) to the webserver, gets a file saying what it has to do at a specific time, downloads any associated files, and then disconnects.
I think 'at' is better suited than cron for running one-time jobs.
For a better approach to the file downloading etc., you should give more details on your setup (like: are the website and the game server on the same machine, or on the same network? etc.).
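For example, once the config files for a booking are in place, the start and stop can each be queued as one-time jobs (the paths and script names below are made up):

# Queue the server start and stop for one booking
echo "/opt/gameserver/start.sh /opt/gameserver/configs/booking42.cfg" | at 18:00 tomorrow
echo "/opt/gameserver/stop.sh booking42" | at 20:00 tomorrow
atq    # list the pending jobs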
You need a distributed task scheduler. With that, you can:
Schedule command "X" to be run at a certain time.
Specify the machine (or ask it to pick a machine from a pool of available machines)
The webserver would send a request to this scheduler, via the command line or via a web service, when a user selects a game server and a time.
You can have a look at: http://www.acelet.com/super/SuperWatchdog/index.html
EDIT:
One more option: http://jobscheduler.sourceforge.net/

Faster clean Perforce sync over VPN

I have to regularly do a clean Perforce sync to new hardware/virtual machines over the VPN. This can take hours as the project is quite large. Is there a way that I can simply copy an up-to-date tree from an existing client and tell Perforce to use this tree?
The Perforce Proxy is the right way to go, but if you really want to, there is a way to do what you asked via the sync command, with the -k switch:
The -k flag bypasses the client file update. It can be used to make the server believe that a client workspace already has the file. Typically this flag is used to correct the Perforce server when it is wrong about what files are on the client; use of this option can confuse the server if you are wrong about the client's contents.
p4 sync -k //depot/someProject/...
You can also use flush, which is a synonym for sync -k:
p4 flush //depot/someProject/...
Just be careful. Remember those last words, "...use of this option can confuse the server if you are wrong about the client's contents."
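Putting that together, the copy-then-flush workflow might look like this (the client names, paths, and changelist number are placeholders; @clientname pins the flush to whatever the source client last synced):

# On the existing machine: find the changelist the source client is synced to
p4 changes -m1 @existing_client
# Copy the tree to the new machine over the LAN rather than the VPN
rsync -a /work/project/ newhost:/work/project/
# On the new machine, with a client spec rooted at /work/project
p4 flush //depot/someProject/...@12345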
Perforce Proxy is almost definitely the way to go, assuming you can dedicate a local machine for this purpose.
A useful tip for a proxy is to get it to refresh its contents overnight, just by creating a dummy client (perhaps on the proxy machine) and kicking off a nightly task to do a sync. A normal sync will do; it does not need to be a clean one. This will ensure that any big changes people have checked in won't necessarily cause a massive lag the first time you need to do a local sync.
Note that you need a live VPN connection between the proxy and the server - the proxy still has to talk to the server to determine if it has the right versions cached. So the proxy needs a reasonably low latency link to the server, but at least you don't have to wait for actual file transfer.
Another alternative you may want to try is the compress option in your client specs (workspaces). This tells the server to compress each file before it is sent, and your p4 client will decompress it automatically. The trade-off here is CPU time on both the server and the client. However, given that you want to sync several local clients, I think the proxy will ultimately be the better solution.
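If you want to try it, compress is one of the flags in the Options field of the client spec, e.g. (the other values shown are the defaults):

p4 client my_client
# then, in the spec form, change nocompress to compress:
Options: noallwrite noclobber compress unlocked nomodtime normdir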
No, but you shouldn't need to: why do you need to do a clean Perforce sync? What's wrong with a normal sync? If you need to clean the tree, then why not work on a copy of the tree?
One alternative might be to run a p4proxy on your end of the VPN connection, then unchanged files won't have to be transferred over the VPN.
If you only require an export - that is, you don't need to keep it up to date or submit changes from it - then you could simply copy an existing checkout and never use Perforce against that tree. But I don't know of any way to convince the Perforce server that you have a checkout without p4 actually checking out the files.
