How can I add, delete, and update resources in CLR assemblies? - resources

I've taken a long look around and can't find any information on altering managed resources in assemblies (note that I'm already familiar with Win32 resources and the APIs for altering those).
My application has resources that need to be updated by the end user, and the application will be distributed as a single executable (so I can't just use satellite assemblies).
I see a few possible workarounds, but they seem hackish:
The first is to use ILMerge: I'd create a new assembly in memory which contains the new resources and use ILMerge to combine it with the original assembly to form the new program. The only downside is that existing resources cannot be updated or deleted, only added to.
The second is somewhat similar: there would be a .netmodule (emitted by the C# compiler) which is run through al.exe with the /embed switch to add the resources and form the new assembly. The downside is that none of the resources in the original assembly would be carried over.
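For what it's worth, here is a rough sketch of how the ILMerge route could look. The file names, resource names, and the exact command lines in the comments are illustrative assumptions, not something the ILMerge or al.exe documentation prescribes for this scenario:

    // Sketch only: write the updated resources to a .resources file, compile
    // them into a resource-only DLL, then merge that DLL into the original
    // executable with ILMerge. All file/resource names are placeholders.
    using System.Resources;

    class ResourcePatcher
    {
        static void Main()
        {
            // 1. Serialise the updated resources.
            using (var writer = new ResourceWriter("Updated.resources"))
            {
                writer.AddResource("Greeting", "Hello, world (updated)");
                writer.Generate();
            }

            // 2. Compile them into a resource-only assembly, e.g.:
            //      csc /target:library /resource:Updated.resources /out:Updated.dll Empty.cs
            //    (or al.exe /embed:Updated.resources /out:Updated.dll /target:lib)
            //
            // 3. Merge it back into the original program, e.g.:
            //      ilmerge /out:Patched.exe Original.exe Updated.dll
            //
            // As noted above, this only adds resources; it does not update or
            // delete the ones already embedded in Original.exe.
        }
    }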
I'm leaning towards the ILMerge option, but the terms on redistribution are ambiguous. The EULA makes no reference to redistribution rights (so I assume, in this negative-freedom society, that it's permitted), yet the Microsoft Research page says redistribution is not permitted (though it's ambiguously worded; from what I can tell it might be referring to commercial redistribution, which wouldn't apply to me since this is a non-profit GPL project).
Thanks

IMHO, I don't think it is a good idea to do this anyway. If these resources are actually user data, even if there is a "preinstalled" set of them, they do not belong in an embedded resource.
Are your assemblies signed? You would have to re-sign them after changing them, which means your private key is exposed and anyone could sign your application. So signing is no longer worth anything and you have a security risk anyway.
Move your resources to an external file. You can still embed the "predefined" resources. The first time your application starts, create the external file and copy the embedded resources into it. Once the external file exists, you don't care about the embedded resources anymore.
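To make that concrete, here is a minimal sketch of the first-run copy, assuming .NET 4.0+ and using a made-up resource name and target path:

    using System;
    using System.IO;
    using System.Reflection;

    static class DefaultResources
    {
        // Copies the embedded defaults to an external file once; afterwards the
        // application reads and updates only the external copy.
        public static string EnsureExternalCopy()
        {
            string target = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                "MyApp", "Defaults.resources");          // placeholder path

            if (!File.Exists(target))
            {
                Directory.CreateDirectory(Path.GetDirectoryName(target));
                using (Stream embedded = Assembly.GetExecutingAssembly()
                           .GetManifestResourceStream("MyApp.Defaults.resources")) // placeholder name
                using (FileStream external = File.Create(target))
                {
                    embedded.CopyTo(external);
                }
            }
            return target;
        }
    }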

Related

Assembly security

I'm currently offering an assembly compile service for some people. They can enter their assembly code in an online editor and compile it. When they compile it, the code is sent to my server with an AJAX request, gets compiled, and the output of the program is returned.
However, I'm wondering what I can do to prevent any serious damage to the server. I'm quite new to assembly myself so what is possible when they run their script on my server? Can they delete or move files? Is there any way to prevent these security issues?
Thank you in advance!
Have a look at http://sourceforge.net/projects/libsandbox/. It is designed for doing exactly what you want on a Linux server:
This project provides API's in C/C++/Python for testing and profiling simple (single process) programs in a restricted environment, or sandbox. Runtime behaviours of binary executable programs can be captured and blocked according to configurable / programmable policies.
The sandbox libraries were originally designed and utilized as the core security module of a full-fledged online judge system for ACM/ICPC training. They have since then evolved into a general-purpose tool for binary program testing, profiling, and security restriction. The sandbox libraries are currently maintained by the OpenJudge Alliance (http://openjudge.net/) as a standalone, open-source project to facilitate various assignment grading solutions for IT/CS education.
If this is a tutorial service, so the clients just need to test miscellaneous assembly code and do not need to perform operations outside of their program (such as reading or modifying the file system), then another option is to permit only a selected subset of instructions. In particular, do not allow any instructions that can make system calls, and allow only limited control-transfer instructions (e.g., no returns, branches only to labels defined within the user’s code, and so on). You might also provide some limited ways to return output, such as a library call that prints whatever value is in a particular register. Do not allow data declarations in the text (code) section, since arbitrary machine code could be entered as numerical data definitions.
Although I wrote “another option,” this should be in addition to the others that other respondents have suggested, such as sandboxing.
This method is error prone and, if used, should be carefully and thoroughly designed. For example, some assemblers permit multiple instructions on one line. So merely ensuring that the text in the first instruction field of a line was acceptable would miss the remaining instructions on the line.
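To illustrate the idea (and its fragility), here is a deliberately conservative sketch of such a filter in C#. The mnemonic whitelist and the statement-splitting rules are assumptions about one particular assembler syntax, not a complete or safe implementation:

    using System;
    using System.Linq;

    static class AsmFilter
    {
        // Illustrative whitelist; a real service would derive this from the
        // assembler actually in use and would also have to ban data directives.
        static readonly string[] Allowed =
            { "mov", "add", "sub", "cmp", "jmp", "je", "jne", "nop" };

        public static bool IsLineAllowed(string line)
        {
            // Some assemblers (e.g. GAS) treat ';' as a statement separator,
            // which is exactly the "several instructions on one line" pitfall.
            foreach (string stmt in line.Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries))
            {
                string trimmed = stmt.Trim();
                if (trimmed.Length == 0 || trimmed.StartsWith("#") || trimmed.EndsWith(":"))
                    continue;                               // blank, comment, or bare label
                string mnemonic = trimmed.Split(new[] { ' ', '\t' })[0].ToLowerInvariant();
                if (!Allowed.Contains(mnemonic))
                    return false;                           // unknown or forbidden instruction
            }
            return true;
        }
    }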
Compiling and running someone else's arbitrary code on your server is exactly that: arbitrary code execution. Arbitrary code execution is the holy grail of every malicious hacker's quest. Someone could probably use this question to find your service and exploit it this second. Stop running the service immediately. If you wish to continue offering it, you should compile and run the programs within a sandbox; until that is implemented, keep the service suspended.
You should run the code in a virtual machine sandbox because if the code is malicious, the sandbox will prevent the code from damaging your actual OS. Some Virtual Machines include VirtualBox and Xen. You could also perform some sort of signature detection on the code to search for known malicious functionality, though any form of signature detection can be beaten.
This is a link to VirtualBox's homepage: https://www.virtualbox.org/
This is a link to Xen: http://xen.org/

Windows BackupRead / BackupWrite and ACLs

I have been trying to understand the right way to use BackupRead and BackupWrite for backing up data on a computer, and especially how to restore it reliably.
Now I understand how to use the API and have been successful. However there's one thing that bothers me.
You can back up, besides the file content itself, any alternate data streams and also the security information (ACLs).
Now if I store the ACL data in the backup and the data later needs to be restored on a different machine OR a newly set up machine, what should I do with the SIDs referenced by the ACLs?
The SIDs are most likely no longer valid for that machine, so how should the right users be selected?
Looking at this on a bigger scale: say this is a computer with multiple users and hundreds or thousands of objects with different settings; it would be a mess to get the data restored with the security settings applied to it again.
Is this something that the user of the software, if they wish to back up the security settings, has to take care of themselves and update accordingly, or what?
Additionally, BackupRead and BackupWrite give me the raw binary data of those items, which is not too hard to work with; however, this API obviously does not even attempt to address this issue.
Does anyone have an idea of how a backup application should handle this situation? Any thoughts, or pointers to guidelines on this specific topic?
Thanks a lot.
I think you understand the problems with backing up and restoring data correctly, and a correct understanding of a problem is half of solving it. I suppose that you are, like most users of this site, mostly a software developer and not the administrator of a large network, so you see the problem from the software developer's side and not from the administrator's side. An administrator knows the restrictions on backing up and restoring ACLs and already works within them.
In general you should understand that the main purpose of a backup is to save data and restore it later on the same computer or server. Another standard case is restoring a backup from one server to another after a hardware change; in that case the old server no longer exists. Mostly one backs up servers and organizes the work on the clients so that no important data is stored on the client computers.
In most cases the backed-up data has domain group SIDs, domain user SIDs, well-known SIDs, or SID aliases from the BUILTIN domain in its security descriptors. In that case no changes to the SIDs are needed at all. If the administrator does want to make some changes to the ACLs, he can use various existing utilities like SubInACL.exe.
If you write backup/restore software that you want to use for moving data together with its security information, you can include in the backup some additional meta-information about the local account/group SIDs used in the saved security descriptors. In the restore software you can then offer the ability to replace SIDs in the saved security descriptors. Many years ago I wrote, for one large customer, some utilities to clean up the SIDs in security descriptors in the file system, registry, and services after a domain migration. It was not that complex, so I suggest you could implement the same feature in your backup/restore software.
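As a rough illustration of that last suggestion (replacing SIDs inside saved security descriptors), the sketch below walks a self-relative security descriptor with System.Security.AccessControl and swaps any SID found in a caller-supplied mapping; the mapping itself would come from the meta-information stored at backup time. It is only a sketch under those assumptions, not production code (SACLs and error handling are omitted):

    using System.Collections.Generic;
    using System.Security.AccessControl;
    using System.Security.Principal;

    static class SdRewriter
    {
        public static byte[] MapSids(
            byte[] sdBinary,
            IDictionary<SecurityIdentifier, SecurityIdentifier> map)
        {
            var sd = new RawSecurityDescriptor(sdBinary, 0);

            SecurityIdentifier newSid;
            if (sd.Owner != null && map.TryGetValue(sd.Owner, out newSid))
                sd.Owner = newSid;
            if (sd.Group != null && map.TryGetValue(sd.Group, out newSid))
                sd.Group = newSid;

            if (sd.DiscretionaryAcl != null)
            {
                foreach (GenericAce ace in sd.DiscretionaryAcl)
                {
                    // Only ACE types that carry a SID (KnownAce) can be remapped here.
                    var known = ace as KnownAce;
                    if (known != null && map.TryGetValue(known.SecurityIdentifier, out newSid))
                        known.SecurityIdentifier = newSid;
                }
            }

            var result = new byte[sd.BinaryLength];
            sd.GetBinaryForm(result, 0);
            return result;
        }
    }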
I do believe the Backup* APIs are primarily intended to back up and restore on the same machine, which would render the SID problem irrelevant. However, assuming a scenario where you need to restore a backup on a new install, here are my thoughts on solutions.
For well-known SIDs such as Everyone, Creator Owner and so on, there isn't really any problem.
For domain dependent SIDs you can store them as is, and upon restore you could fixup the domain part, if needed. Likely you should store the domain name as well for such SIDs.
For local users and groups, you should at least store the user/group name for each SID. Fixup on restore could be partially automatic based on these names, or manual (assuming a user interface for the application) where you ask the user whether he wishes to map this user to a new local user, convert these SIDs to a well-known SID, or keep them as is.
Most of the issues related to such SIDs can (and probably typically will) be possible to handle automatically. I'd certainly appreciate a backup application that was smart enough to do the restore I asked it to and figure out that "Erik" on the old machine must be "Erik" on the new machine as well.
And a side note: if you do decide to go with such a solution, remember how annoying it is to start an overnight data transfer just to come back to it 5% done, blocked on a popup it could just as easily have deferred :)
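For what it's worth, the name-based fixup suggested above could start out as simple as the sketch below; the method name is made up, and a real implementation would also have to handle domain-qualified names and the interactive mapping case:

    using System.Security.Principal;

    static class SidFixup
    {
        // savedName is the account/group name stored next to the SID at backup time.
        public static SecurityIdentifier ResolveOnRestore(
            string savedName, SecurityIdentifier savedSid)
        {
            // Well-known and BUILTIN SIDs are machine-independent: keep them as-is.
            if (!savedSid.IsAccountSid())
                return savedSid;

            try
            {
                // Ask the new machine/domain for an account with the same name.
                var account = new NTAccount(savedName);
                return (SecurityIdentifier)account.Translate(typeof(SecurityIdentifier));
            }
            catch (IdentityNotMappedException)
            {
                // No matching account here: keep the saved SID and let the user
                // or administrator decide what to do with it.
                return savedSid;
            }
        }
    }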

What are the main reasons against the Windows Registry?

If I want to develop a registry-like system for Linux, which Windows Registry design failures should I avoid?
Which features would be absolutely necessary?
What are the main concerns (security, ease-of-configuration, ...)?
I think the Windows Registry was not a bad idea, just the implementation didn't fulfill the promises. A common place for configuration, including for example the Apache config, database config, or mail server config, wouldn't be a bad idea and might improve maintainability, especially if it had options for (protected) remote access.
I once worked on a kernel-based solution but stopped because others said that registries are useless (because the Windows Registry is)... what do you think?
I once worked on a kernel-based solution but stopped because others said that registries are useless (because the Windows Registry is)... what do you think?
A kernel-based registry? Why? Why? A thousand times, why? Might as well ask for a kernel-based musical postcard or inetd, for all the point there is in putting it in there. If it doesn't need to be in the kernel, it shouldn't be in the kernel. There are many other ways to implement a privileged process that don't require deep hackery like that...
If I want to develop a registry-like system for Linux, which Windows Registry design failures should I avoid?
Make sure that applications can change many entries at once in an atomic fashion (see the sketch after this list).
Make sure that there are simple command-line tools to manipulate it.
Make sure that no critical part of the system needs it, so that it's always possible to boot to a point where you can fix things.
Make sure that backup programs back it up correctly!
Don't let chunks of executable data be stored in your registry.
If you must have a single repository, at least use a proper database so you have tools to back up, restore, and recover it, and so you can interact with it without needing a new set of custom APIs.
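On the first point (atomic multi-entry changes), the classic write-then-rename pattern is usually enough. A minimal sketch, shown here in C# purely for illustration and assuming .NET Core 3.0+ for the overwriting File.Move:

    using System.IO;

    static class AtomicConfig
    {
        // Writes the whole updated configuration to a temporary file in the same
        // directory, then renames it over the old one, so readers always see
        // either the complete old state or the complete new state.
        public static void Save(string path, string newContents)
        {
            string temp = path + ".tmp";
            File.WriteAllText(temp, newContents);
            File.Move(temp, path, overwrite: true);   // atomic rename on the same filesystem
        }
    }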
The first one that comes to my mind is that you somehow need to avoid orphaned registry entries. At the moment, when you delete a program you also delete its configuration files, which live under some directory; but once you have a registry system, you need to make sure that when a program is deleted its configuration in the registry is deleted as well.
IMHO, the main problems with the windows registry are:
Binary format. This loses you the availability of a huge variety of very useful tools. In a binary format, tools like diff, search, version control etc. have to be specially implemented, rather than using the best of breed which are capable of operating on the common substrate of text. Text also offers the advantage of trivially embedded documentation / comments (also greppable), and easy programmatic creation and parsing by external tools. It's also more flexible - sometimes configuration is better expressed with a full Turing-complete language than by trying to shoehorn it into a structure of keys and subkeys.
Monolithic. It's a big advantage to have everything for application X contained in one place. Move to a new computer and want to keep your settings for it? Just copy the file. While this is theoretically possible with the registry, so long as everything is under a single key, in practice it's a non-starter. Settings tend to be diffused in various places, and it is generally difficult to find where. This is usually given as a strength of the registry, but "everything in one place" generally devolves to "Everything put somewhere in one huge place".
Too broad. It's easy to think of it as just a place for user settings, but in fact the registry becomes a dumping ground for everything. 90% of what's there is not designed for users to read or modify, but is in fact a database of the serialised form of various structures used by programs that want to persist information. This includes things like the entire COM registration system, installed apps, etc. Now this is stuff that needs to be stored, but the fact that it's mixed in with things like user-configurable settings and stuff you might want to read dramatically lowers its value.

bitrock installBuilder issues

I have recently been tasked with finding a suitable InstallShield replacement and I am leaning towards InstallBuilder over Install4J and InstallAnywhere. Has anyone come across any issues with creating installers that InstallBuilder has been unable to handle? For example, very strict security on the client machine.
*Comment added for additional clarity
For instance, a system that has all accounts disabled except the admin account, with a very unusual domain policy - for instance, the inability to write files to the temp directory. Also, how extensible is your product? From playing around with it I notice it is purely XML, so is there any way to write extensions to the core?
This is Daniel from BitRock. Our installers do not need admin privileges on any platform (unless you explicitly require them) and can install as regular users. If you need to check permissions on the filesystem, registry, etc. from within the installer to see what is available, there is code to do that as well. I am not sure if the above answered your question - can you provide more details about what you mean by restricted security on the client side? We take great pride in our level of support, and we encourage you to contact our support team with any questions or suggestions you may have, just so you can see for yourself.
You should also take a look at InstallJammer just for comparison. It's a lot more open than most of the ones you mention and gives you the ability to do practically anything from within your installer.

network drive file sharing

For the better part of 10+ years we have relied on various mapped network drives to allow file sharing: one drive letter for sharing files between teams, a separate share for the entire organization, a third for personal use, etc. I would like to move away from this and am trying to decide whether an ECM/SharePoint-type solution, or a home-grown app, is worth the cost and the way to go, or whether we should simply keep relying on login scripts/mapped drives for file sharing due to their relative simplicity. Does anyone have any experience within their own organization, or thoughts on this?
Thanks.
SharePoint is very good at document sharing.
Documents generally follow a process for approval, have permissions, live in clusters... and these things lend themselves well to SharePoint's document libraries.
However, there are some things that don't lend themselves well to living inside SharePoint... do you have a virtual hard drive (.vhd) file that you want to share with a workmate? Not such a good idea to try to put a 20GB file into SharePoint.
SharePoint can handle large files, and so can SQL Server behind it... but do you want your SQL Server bandwidth being saturated by such large files? Do you want your backup of SQL Server to hold copies of such large files multiple times?
I believe that there are a few Microsoft partners who offer the ability to disassociate file blobs from the SharePoint database, so that SharePoint can hold the metadata and a file system holds the actual files, and SharePoint simply becomes the gateway to manage access, permissions, and offer a centralised interface to files throughout an organisation. This would offer you the best of both worlds.
Right now though, I consider SharePoint ideal for documents, and I keep large files (that are not document centric) on Windows file shares.
Definitely, use a tool.
The main benefit here is version control: being able to jump easily to a previous version, diffing, and seeing who modified what (see most VCSs' blame/annotate tool - it prints out the text file showing when and by whom each line was modified).
Second, you can probably benefit from issue tracking/task tracking.
Other benefits include web access from the internet, having a wiki (which can be great in some situations), etc.
I use Subversion + Redmine at work, and I find it highly useful- test a few solutions and you will surely find out further advantages for you.
One thing that can be overlooked in the change to a document management tool is the planning required around how much is going to be stored, and information architecture issues like where different content is going to end up.
SharePoint in particular is easy to set up without a good plan going forward, and is particularly vulnerable to difficulties later on when things get too busy.
I would not recommend a home-grown app for something like this. The problem has been solved by off-the-shelf tools, and growing one from scratch is going to cost a huge amount and not get you anywhere near the features for the money.
Did I mention how important planning your security groups and document areas (IA) was?
If you need just document storage then SharePoint can do very well. WSS is even free and it provides very good document storage capabilities.
But you have to plan carefully, as updating existing applications is painful. If you decide to go with SharePoint, I can give you a few pieces of advice off the top of my head:
Pay attention to security configuration (user groups, privileges, ...)
Plan your document libraries well, as it is not easy to just move documents between them
Also consider limiting the number of versions that one document can have, because SharePoint stores each version in full, not just the changes
Don't use InfoPath :) we have had very bad experience with it (just don't tell this to the managers)
If you don't really need to change the graphical look of SharePoint then don't bother with it, as it brings many problems (I'm talking about custom master pages and custom site templates)
Try to use as much OOB stuff as possible, because developing your own web parts not only costs more but can be quite complicated
Make sure to turn on search indexing. This is quite tricky, because it is turned off by default, and then you will be as surprised that search is not working as I was :)
If you just deploy it and load 10,000 documents into it then you will surely have problems later. If you give a little thought to the structure, you will end up with really good document storage.
Migrating is very probably worth the cost in the long term. You will gain reliability, versioning, traceability, and extensibility.
Be sure to first identify the groups/rights, and to identify which links need to be fixed (maybe you have applications that use links to the shares).
An open source alternative to SharePoint is Alfresco; it is very good for CIFS (Windows shares) too.

Resources