Is there a way in Swarm to create a review of files without changing them?
My Use Case:
I was asked to review an existing module in the project. I want to do an online review using Swarm and add comments.
Update
From Swarm Documentation:
Post-commit model
The post-commit model can be used if your team's development processes preclude the use of shelving. Code must be committed to the Perforce service before code review can begin, which reduces the opportunity to fix problems before, for example, a continuous integration system notices problems. However, code reviews can be started for any existing code regardless of how long it has been committed.
I think that documentation is talking about committing a CL first and reviewing it afterwards. I'm talking about existing code that was developed over a period of time, across multiple CLs, and now needs a complete review.
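For what it's worth, Swarm's REST API can start a post-commit review from an already-committed changelist, which partially covers this use case (one review per committed change, not one review for the whole module). The endpoint version and host below are assumptions; check the API documentation for your Swarm version. A minimal sketch of building the request:

```python
# Sketch: create a post-commit Swarm review from a committed changelist.
# The API version (v9) and server URL are assumptions; adjust for your install.

def build_review_request(swarm_url, change):
    """Build the URL and JSON payload for Swarm's create-review endpoint."""
    return (
        f"{swarm_url.rstrip('/')}/api/v9/reviews",
        {"change": change},  # the committed changelist to attach
    )

url, payload = build_review_request("https://swarm.example.com", 12345)
# Actually sending it requires an HTTP client and Swarm credentials, e.g.:
#   import requests
#   requests.post(url, json=payload, auth=("user", "ticket"))
print(url)
```

For a whole module you would still have to open a review per relevant change, or pick a recent change as the anchor and comment on the files it touches.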
Related
MAXIMO Asset Management (MAM) version: 7.6.1.2:
In Work Order Tracking, I can enable flow control on a work order (WO.FLOWCONTROLLED=1).
I'm trying to figure out what happens behind the scenes when flow control is enabled -- so that I can understand how it might impact other processes (e.g. workflow). For example, by doing some random tests, I've observed that it does the following:
WO can't be changed to complete until all tasks are complete
When the user completes all tasks, the WO automatically changes to complete
It's possible that it does other things too -- but I have no way of knowing.
I can't find any specific information in the documentation about what actually happens when WO.FLOWCONTROLLED=1. I've also asked IBM support, but haven't gotten a clear answer there either.
What happens when WO.FLOWCONTROLLED is enabled?
The following link should help clarify how the Flow Control Feature works and is configured in Maximo:
Understanding and Configuring Maximo’s Flow Control Feature
We are using Azure DevOps as our ALM system. When a user story or bug fix is resolved, it shows up in a public query -- like a stack -- from which our QA team members independently pull tickets for verification. Since this is part of a pull request review, a PR cannot be merged until QA has finished testing, so we aim for fast response times and parallel testing to minimise the potential for merge conflicts. Often we find that multiple work items are self-assigned to the same people while other team members have no work items assigned, which increases response times for our devs (unless people change assignments) and leads to sequential rather than parallel verification of work items.
So we are looking for a way in Azure DevOps to ensure that members of a certain user group can only be assigned one work item of a certain type and state at a time. We looked into custom rules in detail but couldn't get anything like this out of them. I'm thankful for any ideas and hints on how this could be accomplished (extensions also welcome).
There is no such rule or policy in Azure DevOps.
And it won't prevent someone from working on it anyway, to be honest... I assume testing multiple changes in a single go isn't an option? It would simplify things tremendously...
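Since there is no built-in rule, one workable fallback is a scheduled script that reports imbalances instead of enforcing them. The sketch below keeps the reporting logic as a pure function; the WIQL query and the work item type/state are assumptions you would adapt, and fetching real items would go through the Azure DevOps REST API (`POST .../_apis/wit/wiql`).

```python
# Sketch: flag QA members holding more than one active verification item.
# Azure DevOps can't enforce this, but a scheduled report can surface it.
from collections import Counter

# Hypothetical WIQL; adjust the type and state to match your process.
WIQL = """
SELECT [System.Id] FROM WorkItems
WHERE [System.WorkItemType] = 'User Story'
  AND [System.State] = 'Resolved'
  AND [System.AssignedTo] <> ''
"""

def overloaded_assignees(items, limit=1):
    """items: iterable of (assignee, state) pairs; return assignees over the limit."""
    counts = Counter(assignee for assignee, state in items if state == "Resolved")
    return sorted(a for a, n in counts.items() if n > limit)

# Example with in-memory data standing in for the query result:
sample = [("alice", "Resolved"), ("alice", "Resolved"), ("bob", "Resolved")]
print(overloaded_assignees(sample))  # -> ['alice']
```

Posting the result to a team channel each morning gets most of the benefit of a hard rule without fighting the platform.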
The bigger goal:
Writing a batch user manager targeted at classroom school environments.
The problem
I want to write a user manager that uses a GUI to add, manage and delete users for classroom environments. The program I'm working on is ltsp-manager.
Up until now all the user management is done by executing bash commands from a Python script, meaning the whole GUI has to run as root and everything is handcrafted.
The goal
Create a Dbus service that handles all the account management and let the GUI run as a regular user requiring a password from time to time.
I looked around and found that org.freedesktop.Accounts already provides a service covering a lot of the functionality I want. However, it also lacks some; management of groups is missing entirely.
What is a good way to use the org.freedesktop.Accounts functionality and add some additional functions/methods?
Thoughts so far
Things that came to my mind include:
just redo everything - meaning a lot of duplicated work.
copy the interfaces and write functions that call the original ones
write a service that only implements the additional functions without touching the original ones. The client will then use the original service and the newly written one.
All my testing experiments are done with python3 and pydbus, which seems to be the best choice among many.
I have never written a real-world D-Bus service, though the experiments do show some results in d-feet. This is not really a what-do-I-need-to-type kind of question but rather a best-practice question.
The best long-term answer would be to fix accountsservice upstream to implement groups support. There’s already work towards that; it just needs someone to pick it up and finish it off. accountsservice is the project which provides the canonical implementation of org.freedesktop.Accounts.
The other approaches are bad because:
just redo everything - meaning a lot of duplicated work.
As you say, this is a lot of duplicated work, and then you have to maintain it all.
copy the interfaces and write functions that call the original ones
That means you have to forever keep up with changes and additions to accountsservice.
write a service that only implements the additional functions without touching the original ones. The client will then use the original service and the newly written one.
That doesn’t come with any additional maintenance problems, but means your service won’t integrate well with accountsservice. There might be race conditions between updates on your D-Bus objects and updates on the accountsservice objects, for example. You won’t be able to share the maintenance burden of the groups code with the (many) other users of accountsservice.
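For completeness, the third approach can be quite small in pydbus. The sketch below exposes only group functionality under a made-up bus name (`org.example.GroupManager` and its interface are illustrations, not an existing standard); clients would keep using org.freedesktop.Accounts for users and this service for groups, with the race-condition caveat above.

```python
# Sketch of option 3: a companion service that only adds group management,
# leaving org.freedesktop.Accounts untouched. Bus and interface names are
# invented for illustration.
import grp

class GroupManager:
    """
    <node>
      <interface name='org.example.GroupManager'>
        <method name='ListGroups'>
          <arg type='as' name='groups' direction='out'/>
        </method>
      </interface>
    </node>
    """
    def ListGroups(self):
        # Read-only example; real add/remove methods would shell out to
        # groupadd/groupdel or edit /etc/group via a privileged helper.
        return [g.gr_name for g in grp.getgrall()]

if __name__ == "__main__":
    # Requires pydbus, GLib, and a running system bus.
    from pydbus import SystemBus
    from gi.repository import GLib
    bus = SystemBus()
    bus.publish("org.example.GroupManager", GroupManager())
    GLib.MainLoop().run()
```

Authorization for the mutating methods would go through polkit, the same way accountsservice does it, so the GUI only prompts for a password when needed.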
Say I've got a \\Repo\... repo. Currently devs generally tend to do all their work directly in there, which normally isn't a problem for small pieces of work. Every so often, this approach fails for various reasons, mainly because they're unable to submit the incomplete change to Live.
So, I was wondering, is there a way to enforce on the server that:
1) no files can be directly checked out from \\Repo\...
2) users then branch to a private area (\\Projects\...)
3) dev, test, submit, dev, test, submit, ...
4) on dev complete, they can re-integrate back into \\Repo\...
I guess the last part is the problem, as files need to be checked out! Has anyone implemented something similar? Any suggestions are much appreciated.
There is no way (that I know of) to enforce this type of workflow in P4. You could try to enforce it by setting commit triggers, restricting permissions, or locking files; however, I believe it would only result in more work (micro-management) and frustrate you and your team.
The best way to establish and enforce any SCM workflow is to set it as company/studio policy. Your team should be responsible for following the set procedure and able to determine (by themselves or through discussion) whether an issue can be fixed in the main line.
One note about the proposed workflow: creating a new branch for every issue will eventually cause problems, and at some point you will need to perform maintenance on the server to conserve disk space and keep depot browsing fast.
For more information on (over)branching in Perforce, read this Perforce blog entry from 2009: Perforce Anti-Patterns Part 2: Overuse of branching.
In many studios using Perforce, most developers have their own "working" branch which they continually re-use whenever there are changes that are not safe or able to be performed in the main line.
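If you do want a partial technical backstop despite the caveats above, the protections table can remove direct write access to the mainline while leaving the private area open. A hypothetical fragment (group names are made up; paths use standard Perforce depot syntax, and later lines override earlier ones; edit with `p4 protect`):

```
## Everyone in group dev: read the mainline, write their project branches
read   group dev          *   //Repo/...
write  group dev          *   //Projects/...
## Only integrators may submit back into the mainline
write  group integrators  *   //Repo/...
```

This enforces step 1 and 4 of the proposed workflow mechanically, but the earlier point stands: it adds friction, and policy plus review is usually the better lever.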
If I understand your question properly, you should try Perforce's shelving and offline-work features. Process is the main thing needed to achieve success in this scenario, so you might need to set up the right process to support it.
For more info about shelving and working offline with Perforce, see the following link:
http://www.perforce.com/perforce/doc.current/manuals/cmdref/shelve.html
This question already has answers here:
How do you protect your software from illegal distribution? [closed]
(22 answers)
Closed 5 years ago.
I am working in a small startup organization with approximately 12 - 15 developers. We recently had an issue with one of our servers whereby the entire server was "re-provisioned", i.e. completely wiped of all the code, databases, and information on it. Our hosting company informed us that only someone with access to the server account could have done something like this -- and we believe that it may have been a disgruntled employee (we have recently had to downsize). We had local backups of some of the data but are still trying to recover from the data loss.
My question is this: we have recently begun using GitHub to manage source control on some of our other projects -- and have more than a few private repositories -- is there any way to ensure some sort of protection of our source code? What I mean by this is that I am aware that you can delete an entire project on GitHub, or even a series of code updates. I would like to prevent this from happening.
What I would like to do is create (perhaps in a separate repository) a complete replica of the project on Git -- and ensure that only a single individual has access to this replicated project. That way, if the original project is corrupted or destroyed for any reason, we can restore to where we were (with history intact) from the backup repository.
Is this possible? What is the best way to do this? GitHub has recently introduced "Company" accounts... is that the way to go?
Any help on this situation would be greatly appreciated.
Cheers!
Well, if a disgruntled employee leaves, you can easily remove them from all your repositories, especially if you are using Organizations -- you just remove them from a team. In the event that someone who still had access for some reason maliciously deletes a repository, we have daily backups of all of the repositories, which we will reconstitute if you ask. So you would never lose more than one day of code work at worst, and likely someone on the team will have an up-to-date copy of that code anyhow. If you need more protection than that, then yes, you can set up a cron'd fetch or something that mirrors your code more often.
First, you should really consult GitHub support -- only they can tell you how they do backups, what options for permission control they have (especially now that they have introduced "organizations"), etc. You also have an agreement with them -- do read it.
Second, it's still very easy to run git fetch from cron, say, once an hour (on your local machine or on your server) -- and then you're pretty safe.
Git is a distributed system, so your local copy is the same as your remote copy on GitHub! You should be OK to push it back up there.
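To make the cron'd fetch mentioned above concrete, a bare mirror clone plus one crontab entry is enough. The repository URL and paths here are placeholders:

```
# One-time setup: a bare mirror of the private repo
git clone --mirror git@github.com:example/project.git /backups/project.git

# crontab entry: refresh the mirror hourly
0 * * * * cd /backups/project.git && git fetch --all --prune --quiet
```

A `--mirror` clone keeps all branches and tags, so if the GitHub copy is ever deleted you can push the mirror back up with full history intact.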