This will be pure speculation on my part now.
As I understand it, graphics cards rarely saturate their PCI-E link even under normal conditions.
Benchmarks show little difference between x16 and x8. Extrapolating those results and cross-referencing them with the PCI-E standards, I imagine there would also be little difference between PCI-E 3.0 x16 and PCI-E 2.0 x16, because the theoretical maximum bandwidth of PCI-E 2.0 x16 is very close to that of PCI-E 3.0 x8.
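To put rough numbers on that claim, here's a quick back-of-the-envelope calculation using the per-generation transfer rates and line encodings from the PCI-E specs (theoretical maxima only, ignoring protocol overhead beyond the encoding):

```python
def pcie_bandwidth_gbps(gen, lanes):
    """Theoretical per-direction bandwidth in GB/s for a PCIe generation and lane count."""
    # (transfer rate in GT/s, encoding efficiency) per generation:
    # gen 1/2 use 8b/10b encoding, gen 3 uses 128b/130b.
    specs = {1: (2.5, 8 / 10), 2: (5.0, 8 / 10), 3: (8.0, 128 / 130)}
    rate, efficiency = specs[gen]
    return rate * efficiency / 8 * lanes  # divide by 8: bits -> bytes

print(f"PCI-E 2.0 x16: {pcie_bandwidth_gbps(2, 16):.2f} GB/s")  # 8.00 GB/s
print(f"PCI-E 3.0 x8:  {pcie_bandwidth_gbps(3, 8):.2f} GB/s")   # ~7.88 GB/s
```

So PCI-E 2.0 x16 (8 GB/s) and PCI-E 3.0 x8 (~7.88 GB/s) are within about 2% of each other, which is why benchmarks struggle to tell them apart.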
I would imagine that a graphics card has a minimum performance threshold that a particular slot must meet. The likelihood is that a modern graphics card would be bottlenecked by an x1 slot with an adapter, simply because that slot was never designed for high bandwidth.
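The same back-of-the-envelope maths makes the x1 bottleneck obvious. Assuming a PCI-E 2.0 slot (500 MB/s per lane after 8b/10b encoding):

```python
lane_2_0 = 0.5  # GB/s per lane at PCI-E 2.0 (5 GT/s, 8b/10b encoding)
x1 = lane_2_0 * 1
x16 = lane_2_0 * 16
# An x1 slot offers only a sixteenth of the bandwidth of x16.
print(f"x1 slot: {x1:.1f} GB/s, i.e. {x1 / x16:.0%} of an x16 slot")
```

Half a gigabyte per second, a sixteenth of what the card was designed for, is well below what I'd expect a modern card's minimum threshold to be.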
As for the adapter, I think I see where you're going with it, but I imagine it would be difficult. Correct me if I'm wrong, but this is how I read your question: the graphics card's edge connector can physically fit into the x1 slot (with the rest of the contacts hanging out), the adapter fits into the x1 slot, and a portion of the graphics card's contacts goes unused.
I would think the adapter provides physical support; it also routes the correct contacts to the x1 slot, since we don't know which contacts are required for data transfer.
Additionally, this thread may be useful for what you have in mind.