Hopefully VMUG does not use the same system as the regular licensing; they are not issuing new licenses or quoting prices atm.
8 lanes just means the card has 8 lanes directly on it.
That lets you connect 2 cables breaking out to 8 ports without needing an expander.
The expander works similarly to a network switch; the cables back to the P408i-a are its uplink.
With 2 cables it aggregates the 8 lanes into 8x 12Gbit, and that 96Gbit is shared by the devices on the expander.
In the typical DL380 the expander would have 2 cables to each of the 3 front cages, plus 2 spare ports for any drives in the rear of the server.
With a maxed-out 30x 2.5" front+rear server, they all share those 8 lanes back to the controller.
You can even daisy-chain another expander off the first one if you want; the card supports 200+ drives if you keep throwing expanders at it.
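Rough napkin math on how that shared uplink divides up (a sketch; 12Gbit per lane is the SAS3 line rate, real-world throughput is a bit lower after protocol overhead):

```python
# Napkin math: how the expander's uplink back to the P408i-a gets shared.
LANES = 8            # two x4 cables back to the controller
GBIT_PER_LANE = 12   # SAS3 line rate per lane

uplink_gbit = LANES * GBIT_PER_LANE   # 96 Gbit/s aggregate
for drives in (8, 24, 30):
    print(f"{drives} drives -> ~{uplink_gbit / drives:.1f} Gbit/s each if all are busy at once")

# Spinning disks top out around 2 Gbit/s sequential, so even 30 of them
# have a hard time saturating the 96 Gbit/s uplink.
```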
I would not expect it to just ping something like google.com, though it may well use their DNS.
The ones I've dealt with have a rotation of URLs they try to load.
Bonus joy is when you use the TV longer than they expect and some of those URLs stop responding, so even with no firewall/VLAN segmentation it still needs this spoofing done.
I hope you got it for free and did not pay for that thing.
But for the use case, you might want to add what gen the box is and what controller/expander it has installed in the rear.
Atm I'm not running much since I don't have much time to lab, just the base stack.
About 35€/mo for 650W average consumption.
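For reference, the rough math behind that number (a back-of-envelope sketch; the per-kWh price is just what falls out of these figures, not a quoted rate):

```python
# Back-of-envelope: what ~35 €/month for a 650 W average draw works out to.
avg_watts = 650
cost_per_month_eur = 35
hours_per_month = 730                                # ~24 h * 30.4 days

kwh_per_month = avg_watts / 1000 * hours_per_month   # ~475 kWh
implied_price_per_kwh = cost_per_month_eur / kwh_per_month
print(f"~{kwh_per_month:.0f} kWh/month at ~{implied_price_per_kwh:.3f} €/kWh implied")
```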
For my own lab, eBay.
As a small company reselling hardware: eBay, plus buying directly from eBay sellers off-platform.
PSUs have an efficiency curve; you gain more efficiency once there is some load on them.
This part is often overlooked for sure.
If you plan on running it for a while, picking up a pair of lower-wattage PSUs will often recoup the cost + help reduce noise.
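Rough illustration of the kind of difference the curve can make (the efficiency figures and load here are assumptions picked for the example, not measurements of any specific PSU):

```python
# Illustration only: assumed efficiencies and load, not measured values for any particular PSU.
# An oversized PSU loafing at the bottom of its curve vs a smaller one in its sweet spot.
dc_load_watts = 250      # hypothetical average server draw on the DC side
eff_oversized = 0.82     # e.g. a 1400 W PSU at very low load (assumed)
eff_right_sized = 0.92   # e.g. a 500 W PSU around 50% load (assumed)
price_per_kwh = 0.074    # assumed electricity price in €/kWh

def monthly_cost_eur(dc_watts: float, efficiency: float, hours: float = 730) -> float:
    wall_watts = dc_watts / efficiency
    return wall_watts / 1000 * hours * price_per_kwh

saving = monthly_cost_eur(dc_load_watts, eff_oversized) - monthly_cost_eur(dc_load_watts, eff_right_sized)
print(f"~{saving:.1f} €/month saved just from sitting higher on the efficiency curve")
```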
Tailscale negotiates a direct VPN tunnel to the VPS, and all traffic going through the VPN exits via the VPS.
As for bandwidth, it's not really that expensive unless you need like 30-50TB per month type numbers.
If it's specific machines, you can install Tailscale on those as well and they will make a direct connection.
I've got it on my phone, laptop, tablet, etc., so wherever I am it will use Tailscale as a middleman to find open ports and establish a VPN back to the home network.
Tailscale and a cheap VPS running the exit node tends to be a common route.
Lets you expose services without opening anything locally, and gives you full control out from the VPS without ISP meddling.
There are other alternatives, but Tailscale has the best free tier, with up to 100 devices, exit node + subnet router support, solid access control and MFA.
Every time I see that card I go “who on earth is this for, who would want this?”
Beyond being behind a PLX switch for added latency, there is not actually enough bandwidth to support using all 3 features.
The plus side of selling to the graphics market, I guess; they are "non-tech tech".
I doubt there is much of a market for x1 SFP+ cards though.
I'd expect most do the same as I do now: just stick an x4/x8 card in the x1 slot.
It tends to go hand in hand with a NAS that has a built-in 10GBASE-T switch.
SFP+ tends to get pitched on latency in this segment, but that ship has sailed once the NIC sits behind a PLX switch.
The IOM6 units are just dumb expanders; they do not support zoning or otherwise splitting up the shelf.
They just give direct access to all the drives and that's it.
On paper you can split it by connecting both servers and making sure you don't use the same drive on both.
(Would expect you to need interposers if these are SATA drives.)
I have not done this myself, but I've seen it done in multiple setups.
The cleaner approach IMO would be a virtual TrueNAS (or similar) file server that runs on the host needing the lowest latency and shares out to the rest.
The Firebox M500 is solid for running pfSense/OPNsense.
Tripp Lite PDUs are very nice if they have the management card in them.
The 3560 I would not use due to its power consumption/age.
How small do you need to start?
And you need to define “FAST” with an actual number you need.
Power disable enables the host to power cycle the drive without power cycling itself or you pulling and re-seating the drive.
The downside is that when plugged into "legacy hardware" that does not support this feature, you can't feed 3.3V to that pin or the drive won't power up.
That's why you often see people taping over a few pins on the drive's power connector, or using Molex-to-SATA adapter cables that do not carry those pins.
The Ultrastar enterprise drives you often get when shucking tend to have this feature.
These days a barebones R730 or DL380 Gen9 will often be cheaper than a generic Chinese X99 board. (Barebone/CTO is generally only missing CPU/RAM/storage.)
They use the same typical E5-2600 v4/DDR4, with the SAS HBA + NIC usually included since those are specific to the server.
And you then also get PSUs, case, heatsinks, fans etc. included.
Buying a board and building it up yourself is the expensive route; it's usually not taken unless you have nowhere to stash the noisy rack option.
As a fellow Norwegian, I've always just vented the air into the living area (with a sound trap after the fan).
A few places ive lived this has been all my heating all year.
That sounds like a $12-15 TV box.
Their downside is I/O on network/storage; add the cost of that and you start passing thin clients/minis in actual cost.
But with the ARM option you end up at close to the same consumption for cores that have 10% of the performance the cheap x86 would give.
You can put a tape drive on the last port like you describe.
Nothing on the expander will suffer unless the full 96Gbit link to the RAID card is saturated and it has to start throttling stuff.
If you have something like 4-8x cache SSDs with high load, they would normally go in one cage and that cage gets its own controller.
That way you don't risk saturating the link from the expander and impacting performance, + you avoid the tiny latency the expander adds.
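Quick numbers on why those SSDs warrant their own controller (a sketch with ballpark figures; a fast SAS3 SSD can get close to the line rate of its lane):

```python
# Sketch: how quickly fast SSDs can eat the expander's uplink to the controller.
uplink_gbit = 8 * 12   # 2x 4-lane cables back to the RAID card = 96 Gbit/s
fast_ssd_gbit = 12     # a quick SAS3 SSD can push close to the 12 Gbit/s line rate of its lane

print(f"~{uplink_gbit / fast_ssd_gbit:.0f} busy SSDs are enough to saturate the uplink,")
print("which is why a handful of heavily loaded cache SSDs get their own cage and controller.")
```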