I work on Windows 10.
I have created a Google Cloud Linux Compute Engine instance with a 230 GB standard persistent disk, 1 GPU (Tesla K80), 13 GB of memory, and 2 vCPUs.
I have installed Jupyter Notebook and all the deep learning frameworks, and I am able to use them perfectly.
But I don't know how to access the deep learning data that is on my own computer from the Jupyter notebook running on my Compute Engine instance.
Can anybody tell me how to use the boot disk and what exactly its use is?
How do I access data from my laptop?
I looked into the following links but couldn't understand the terminology.
https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting
https://cloud.google.com/compute/docs/disks/mount-ram-disks
To clarify the terminology:
Persistent disk: this works the same way as adding a hard disk to your machine. If you add one more, you have to mount it somewhere inside your filesystem (e.g. /media/data). You can find the commands for making the directory and mounting the disk in the documentation you mentioned (from step 5 onward); see the sketch below.
RAM disk: this sets aside part of the instance's memory and treats it as a disk (e.g. for high-performance computing). It is not considered storage; it counts as tmpfs and does not keep data permanently. You may use it if your task requires a greater amount of RAM.
(Disclaimer: I have never used either kind of extra disk myself.)
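A minimal sketch of the format-and-mount steps from that documentation, assuming the new disk shows up as /dev/sdb (check yours with lsblk):
sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb
sudo mkdir -p /media/data
sudo mount -o discard,defaults /dev/sdb /media/data
And the RAM disk variant, backing a mount point with tmpfs (the size is an example):
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=8G tmpfs /mnt/ramdisk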
If you cannot find your data in Jupyter, it depends on the directory where you started the Jupyter notebook. For example, if you start Jupyter in your home directory, you will only see data under the home directory. If you have a mounted drive, one way to reach that mount is to make a symlink to it from your working directory.
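For example, if the extra disk is mounted at /media/data and Jupyter runs from your home directory:
ln -s /media/data ~/data
The notebook file browser will then show the mount under ~/data.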
P.S. You can also use software like WinSCP to access the whole file system, instead of going through Jupyter alone.
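If you have the Cloud SDK on your laptop, you can also copy the data up from Windows with gcloud (the instance name, zone, and paths here are placeholders):
gcloud compute scp --recurse C:\datasets\my-data my-instance:~/data --zone us-central1-a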
Make sure to set an ingress firewall rule to allow traffic to the GCE instance.
In the console, go to:
Networking > VPC network > External IP addresses
and reserve a static IP address.
Then go to:
VPC network > Firewall rules
and create a rule allowing protocol tcp:9999 from source range 0.0.0.0/0, applied via a network tag.
When you create your instance, associate it with both the static IP address and the firewall rule (through its tag).
Here you can find more detailed instructions on how to create firewall rules on a GCP project: https://cloud.google.com/vpc/docs/using-firewalls
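The same rule can also be created from the command line; a sketch with gcloud (the rule name, tag, and instance name are placeholders):
gcloud compute firewall-rules create allow-jupyter --allow=tcp:9999 --source-ranges=0.0.0.0/0 --target-tags=jupyter
gcloud compute instances add-tags my-instance --tags=jupyter --zone=us-central1-a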
I am searching for an answer on how to create and pass through a raw device to a VM using Proxmox. Through that I am hoping to get full control of the disk, including S.M.A.R.T. stats and disk spindown.
Currently I am using the SATA passthrough offered by Proxmox.
Unfortunately I have no clue how to create a raw disk file from my (empty) disk. Furthermore, I am not entirely certain how to bind it to the VM.
I hope someone knows the relevant steps.
Side notes:
This question is just one approach I want to try out to achieve a certain goal. For the sake of simplicity I have confined my question to the part above. However, if you have a better idea, feel free to give me a hint; so far I have tried a lot of things to achieve my ultimate goal.
Goal that I want to achieve:
I am using Proxmox VE 5.3-8 on an HP ProLiant Gen8 server. It hosts several VMs, among which OMV should serve as a NAS. Since the files will not be accessed too often, I opt for a spindown of the drives.
My goal is reduction of noise and power savings.
Current status:
I passed through two disks by adding them to
/etc/pve/nodes/pve/qemu-server/vmid.conf
sata1: /dev/disk/by-id/{disk-id}
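The equivalent via the Proxmox CLI (with 100 standing in for my vmid) would be:
qm set 100 -sata1 /dev/disk/by-id/{disk-id}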
With that passthrough in place I do see SMART stats, and everything except disk spindown works fine. Using virtio instead of SATA does not give me SMART values.
Using hdparm -y to put a drive to sleep does not work inside the VM. Doing the same on the Proxmox console puts the drive to sleep, but it wakes up a few seconds later.
Passing through the entire HBA is currently not an option.
I read in a forum that first installing Debian and then manually installing the Proxmox packages led to success. However, that was still for Debian Jessie and three years ago:
Install Proxmox VE on Debian Stretch
Before I try this as a last resort, I want to make sure whether passing the disk through as a raw file will actually lead to that result.
Maybe someone has an idea on how to achieve my ultimate goal.
I do not have a clear answer to your question as far as "passing through" the disk goes, but I recently found a good-enough solution for my use case.
I have an HDD that I planned to use as a backup directory for VMs, but I also wanted to put any kind of data on it and share the disk with any VM that wants it.
The solution I found is to format the disk using ZFS, then create mount points for different uses (vzdump backups, a NAS folder shared across VMs, an ISO mount point, etc.). I followed this guide: https://forum.level1techs.com/t/how-to-create-a-nas-using-zfs-and-proxmox-with-pictures/117375
I ended up installing Samba on the Proxmox host itself, with a config to share some folders/mount points of the disk via SMB. Now the device appears as a normal disk over the network, with excellent read/write speed since everything is local.
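A rough sketch of what that looks like (the pool name, device, and share section are examples from my setup, not from the guide):
zpool create tank /dev/disk/by-id/{disk-id}
zfs create -o mountpoint=/tank/backup tank/backup
zfs create -o mountpoint=/tank/share tank/share
Then the share section added to /etc/samba/smb.conf on the Proxmox host:
[share]
    path = /tank/share
    read only = no
    guest ok = no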
Sorry that this post does not "answer" your question (no SMART data or low-level things like that :'( ), BUT: shared storage ^^'
We have about 300 sites, but the combined IIS root content is about 1 TB. We'd like to use Route 53 failover for load balancers in two AZs in the same region, and have the IIS web heads come up and down in an autoscaling group as needed.
1 TB is a little much to attach to each autoscaling instance, especially when traffic starts bringing up several instances in each AZ.
We are using a separate pair of DFS boxes at the moment to achieve this, but I really feel like there's a better, higher-performance way.
What should we use to provide the fastest and most reliable shared storage for our IIS autoscaled nodes that can be replicated across AZs if needed?
Thanks
Try using CloudFront: it will distribute your static content to all AWS edge locations (or not; you can tune this), reducing the load on your servers and lowering response latency. Using this service you will save server resources and buy time to migrate the static content into S3.
In addition, CloudFront setup is very straightforward.
On the other hand, if you want to persist with shared storage: EBS (Elastic Block Store) cannot be mounted on more than one instance at the same time, so you cannot use it for this, but you still have at least two alternatives:
Create a new instance to be the file server; here you can try FreeNAS or an equivalent solution, or even another Windows server.
Use a driver that mounts an S3 bucket as a share, such as TNTDrive or WinS3Fs.
What about storing your files in S3? http://aws.amazon.com/s3/
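If you try that, the AWS CLI can push the existing content up in one go (the bucket name is a placeholder):
aws s3 sync C:\inetpub\wwwroot s3://my-site-content/wwwroot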
Suppose I were to set up an Ubuntu machine and install some services and software on it. Further suppose I were to set up another stock Ubuntu machine, this time without the additional services and software. I know there are ways of creating installation/setup scripts or taking disk images and such to build large numbers of identical machines. But if I were to programmatically take a file-based diff between the two installations and migrate all file additions/changes/removals from the fully configured system to the stock system, would I then have two identical, working systems (i.e. a full realization of the "everything is a file" Linux philosophy)? Or would the newly configured system be left in an inconsistent state because simply transferring files isn't enough? I am excluding hostname references and the like from my definitions of identical and inconsistent.
I ask this because I need to create a virtual machine, install a bunch of software, and add a bunch of content to tools like Redmine, and in the near future I'm going to have to mirror that onto another VM. I cannot simply take a disk image, because the source I receive the second VM from does not give me that sort of access, and the VM will have different specs. I also cannot go with an installation-script-based approach at this point, because that would require a lot of overhead, would not account for the added user content, and I won't know everything that is going to be needed on the first VM until our environment is stable. The approach I asked about above seems to be a roundabout but reasonable way to get things done, so long as its assumptions are theoretically accurate.
Thanks.
Assuming that the two systems are largely identical in terms of hardware (that is, same network cards, video cards, etc.), simply copying the files from system A to system B is generally entirely sufficient. In fact, at my workplace we have used exactly this process as a "poor man's P2V" mechanism in a number of successful cases.
If the two systems have different hardware configurations, you may need to make appropriate adjustments on the target system to take this into account.
UUID Mounts
If you have UUID based mounts -- e.g., your /etc/fstab looks like this...
UUID=03b4f2f3-aa5a-4e16-9f1c-57820c2d7a72 /boot ext4 defaults 1 2
...then you will probably need to adjust those identifiers. A good solution is to use label based mounts instead (and set up the appropriate labels, of course).
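For example (the device name and label are placeholders):
sudo e2label /dev/sda1 boot
and then in /etc/fstab:
LABEL=boot /boot ext4 defaults 1 2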
Network cards
Some distributions record the MAC address of your network card as part of the network configuration and will refuse to configure your NIC if the MAC address is different. Under RHEL-derivatives, simply removing the MAC address from the configuration will take care of this. I don't think this will be an issue under Ubuntu.
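On a RHEL-derivative that means editing the interface config; for example (the interface name and MAC are placeholders):
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- delete or comment out the line
# that pins the old MAC address:
# HWADDR=00:11:22:33:44:55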
I was using the free evaluation of Windows Azure Virtual Machines to see if my company could switch from our local cloud service provider to Windows Azure. Today I exceeded the usage limit (I forgot to stop/delete one spurious VM created for testing only). The main VM was disconnected and its disk stored in the storage account (which didn't exceed its limit, as this is, after all, a test, and usage is very low). I need the disk and, most importantly, the data inside it, as it has performance information and even some production data that should never have ended up on that test VM. But... anyway, I'm not able to:
1) Reuse the old DNS name. Even with the VM removed, when I try to create a new one and select the old DNS name, it tells me the name is already in use (xpihomo.cloudapp.net).
2) Create a virtual machine at all. It keeps giving me errors like this: "The server encountered an internal error. Please retry the request. The long running operation tracking ID was: 9439ecfa765c4d6d94bd9238a6e579a1."
UPDATE 2: I've deleted the Cloud Service; now I can reuse the DNS name, but I still can't recreate the VM. It now gives this error: "A lease conflict occurred with the blob https://portalvhds4bqgghbw63gp9.blob.core.windows.net/vhds/lucasoft-eval-eval-vm-2012-10-25.vhd. The long running operation tracking ID was: 97c64baeebce4c2d8e4a3c9661ef0130."
I'm also very disappointed that I only have a forum to ask for guidance and support. I can't imagine how I would proceed if this issue involved sensitive information. Post it on a forum for everyone to see?
Even worse: the forum software insists on using pt-BR as its locale (I'm from Brazil), and in the pt-BR forums there is no dedicated Virtual Machines Preview forum! I've posted this same message there, but I'm not hoping to get any answers.
Does anyone know what may be going on that is preventing the VM creation?
And, as an extra question, does anyone know if Amazon services suffer from the same (lack of) support issues?
UPDATE: I've found an answer for (1):
http://social.msdn.microsoft.com/Forums/pl-PL/WAVirtualMachinesforWindows/thread/ba1c873d-d86f-48aa-8ab2-98a058b7fdcd
When the VM instance is deleted, the Cloud Service used to run it is not deleted automatically, and it keeps a hold on the DNS name. If you want to reuse the same DNS name you have to delete the Cloud Service entry... But would that delete the data from my VM? I don't think so, as it is stored in a separate VHD, but I just can't take the risk of deleting it right now...
I was able to resolve this issue by a very dirty, dirty route. Since it seemed that a lease conflict was preventing the disk from being mounted (certainly a bug in Azure's VM implementation), I resorted to downloading a tool to manage the storage account, copied the pertinent disk to another image, and it worked. Here are the steps taken:
Downloaded the CloudXplorer evaluation (I'll certainly buy a license, because they saved my skin): http://clumsyleaf.com/products/cloudxplorer
Connected to my storage account (using the keys shown in the web portal manager)
Browsed to my storage and to the vhds container (the two VM disks were there, the one that worked and the one that didn't)
Created a new folder inside it, named "Teste"
Copied the pertinent disk
Pasted the copied disk into the "Teste" folder
Renamed the disk in the "Teste" folder to something I wanted
Cut the renamed disk from the "Teste" folder
Pasted the renamed disk back into the vhds "root"
Logged into the cloud management portal and went to Virtual Machines > Disks
Used (+) CREATE DISK and chose the renamed disk in the popup dialog, making sure to check the "OS Disk" option and Windows
Created a new VM from the gallery, choosing the disk above
The VM was created successfully
Needless to say, this was a bug in Microsoft Azure: they kept a lease on a disk that was no longer in use.
The workaround works, but it certainly is a stain on Windows Azure's reliability and on its support team.
We have set up 3 virtual machine server machines that mount the VMs from 2 other storage machines. We mount the VMs from the storage servers to have less data to move when migrating the VMs (pause on one server, mount on the new server, unpause) and to facilitate snapshots and backup.
We were in the middle of an extended power outage due to storms (the ops team forgot to check that we had fuel in the generator, and they don't test it weekly; tsk, tsk), so we shut everything down.
After fueling the generator, we started to bring everything up. Big problem:
to NFS-mount the storage, NFS wants to do a reverse DNS lookup, but the DNS server is a VM that can't start until the storage is NFS-mounted!
We copied the DNS server VM to one of the VM servers locally and started it so we could then bring everything up.
We would like to run NFS without the reverse lookup (everything is on our internal network) but can't find out how to turn it off.
Any help is appreciated.
Put the IP address of the NFS clients in the /etc/hosts file of the NFS server with a comment like:
# 2009-04-17 Workaround a chicken and egg DNS resolution problem at boot
192.0.2.1 mynfsclient
192.0.2.2 anothernfsclient
Then, add to your runbook "When changing the IP addresses of a machine, do not forget to update the hosts file of the NFS server".
Now, shutting off this stupid DNS test in the NFS server depends on the implementation, and you did not indicate the OS or the server type.
I had a similar problem with an old Yellow Machine NAS box - I was having DNS/DHCP fights where the reverse lookups were not matching the forward lookups.
In our case, just putting dummy entries in the NAS box's /etc/hosts for all the IPs solved the problem. I didn't even need correct names for the IPs - just any name per IP stopped mountd from complaining.
(Interesting side note - at least in the older version of Linux on the NAS box, there's a typo in the NFS error message: "DNS forward lookup does't match with reverse".)
Can't you just put the IP address of the server in question in the fstab file, so that no DNS lookup is required?
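For example, a line like this in the client's /etc/fstab (the address, export, and mount point are placeholders):
192.0.2.10:/export/vms /mnt/vms nfs defaults 0 0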
It's NFSv4; the problem is that all access requests use a reverse DNS lookup to determine the NFS domain for access/security purposes.
I think you can stop this behavior by putting a line in /etc/default/nfs containing:
NFSMAPID_DOMAIN=jrandom.dns.domain.com
This needs to match across all the systems that are sharing/using NFS from each other. See the section about setting NFSMAPID_DOMAIN, toward the end of the page, which explains what happens when it's not set.
NFSv4 - more fun than a bag of weasels.