These are running notes from the X Developer's Conference in San Jose, February 7 through 9, 2007.

Wednesday, February 7

(intro speech)

What are we doing this year? By the end of the week, we should know and communicate this.

Peter Hutterer: MPX

Slides: PDF slides

Basic problem: Only one focus for keyboard and mouse input

Existing multi-user toolkits don't solve this: applications have to be written against them, so they don't work for arbitrary apps. So, hey, let's give X multiple pointers!

Basically all functionality works. Each pointer acts like a core pointer and like an XI pointer; pointers can have different shapes, can be queried, and can be warped; each keyboard can have a different focus; and pointers and keyboards can be dynamically paired.

So what changed?

Event delivery is modified so that every device has its own sprite structure, instead of just one like we have now. Therefore, on event dequeue, we know which device generated the event.

Cursor rendering goes entirely through software now, since basically no hardware has more than one cursor in hardware. This had to be extended to handle the cases where the backing tiles for each cursor overlap.

In standard X, you get one shape per window, and shapes can be inherited. In MPX, each device can have one shape per window, and the inheritance works the way you expect.

New requests and events: QueryDevicePointer, WarpDevicePointer, DefineDeviceCursor, ChangePointerKeyboardPairing, DeviceEnterNotify/DeviceLeaveNotify, PairingChangedNotify.
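A rough sketch of how a client might drive these, assuming Xlib-style bindings that mirror the protocol names above; the MPX client API was still in flux at this point, so the names and signatures here are guesses, not the final interface:

    /* Hypothetical per-device calls mirroring the MPX requests above;
     * illustrative only, not the final API. */
    #include <X11/Xlib.h>
    #include <X11/extensions/XInput.h>

    void mpx_demo(Display *dpy, Window root, Window win,
                  XDevice *ptr, Cursor cursor)
    {
        Window root_ret, child_ret;
        int root_x, root_y, win_x, win_y;
        unsigned int mask;

        /* Per-device XQueryPointer: where is *this* pointer? */
        XQueryDevicePointer(dpy, ptr, root, &root_ret, &child_ret,
                            &root_x, &root_y, &win_x, &win_y, &mask);

        /* Per-device XWarpPointer: move only this pointer. */
        XWarpDevicePointer(dpy, ptr, None, root, 0, 0, 0, 0, 100, 100);

        /* Give this device its own cursor shape on one window. */
        XDefineDeviceCursor(dpy, ptr, win, cursor);
    }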

About 30 calls in the core protocol no longer have a defined meaning (which pointer are they talking about?). Possible solution: a "SetPointerBehaviour" request, with modes like FollowSingle or DevicePointer, which the window manager would enforce for naive apps. Until this happens there are lots of race conditions when multiple users interact with the same window or widget.

Really need window manager support for this to work. There is a demo wm that works, blackbox kinda works, metacity completely doesn't.

Applications need to become aware of this too. Pointers can pop in and out of existence now.

Things to think about: it's not ready yet.

Questions:

(Intermission for lunch orders)

Philip Langdale: Virtual Multihead in VMware

Host support

Single head: plain fullscreen

Multihead: Old school manual window resizing

Guest support (this was pre-RandR 1.2): yet another pseudo-Xinerama, with an additional call to the VMware extension to send a new Xinerama config.

Needs RANDR 1.2 integration. EWMH needs extending to cover maximization across multiple screens.

(shiny demo)

(another intermission)

Keith Packard: RANDR 1.2

Things we had tried before: Xinerama, xf86vidmode, RandR Classic

Core X does not support multiple screens well. Number of screens is fixed, size of each screen is fixed, monitors probed at startup. This info is passed over to Xlib at app startup, and is really hard to fix. The server also makes this fragile internally, but that's fixable. But many resources are per-screen, so it just doesn't work the way you want.

Xinerama. Merges many monitors into one screen. Allows apps to move across screens, which is cool! Screen config was fixed at startup, so suitable for fixed multi-head environment. (Initial implementation also happened to be wildly inefficient.)

Changing modes was xf86vidmode. Changes monitor mode on the fly, but not the screen size. Whee, pan and scan. Also exposes gamma correction. But screen size is still fixed at startup, and set of modes fixed at startup.

Changing screen size was RandR. Runtime changes to screen size, but still fixed set of sizes and monitor modes, and mode expressed as size and refresh only.

RandR 1.0 was done for kdrive, for rotation. When it was added to XFree86 it was done without changing drivers, so no mode reprobing, no rotation...

RandR 1.2 fully expresses hardware capabilities. All configuration can be changed. Unified config file structure, reduced driver-specific code, unifies the semantics of the above extensions.

The three objects: Screen, CRTC, Output. (Pretty picture). One screen, N CRTCs, connected to M outputs. (Shiny demo)
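A minimal sketch of walking those three objects with the RandR 1.2 client library (assuming the libXrandr 1.2 API; error handling omitted):

    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/Xrandr.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        Window root = DefaultRootWindow(dpy);

        /* One screen's worth of resources... */
        XRRScreenResources *res = XRRGetScreenResources(dpy, root);

        /* ...N CRTCs... */
        for (int i = 0; i < res->ncrtc; i++) {
            XRRCrtcInfo *crtc = XRRGetCrtcInfo(dpy, res, res->crtcs[i]);
            printf("CRTC %d: %ux%u+%d+%d\n", i,
                   crtc->width, crtc->height, crtc->x, crtc->y);
            XRRFreeCrtcInfo(crtc);
        }

        /* ...connected to M outputs. */
        for (int i = 0; i < res->noutput; i++) {
            XRROutputInfo *out = XRRGetOutputInfo(dpy, res, res->outputs[i]);
            printf("Output %s: %s\n", out->name,
                   out->connection == RR_Connected ? "connected"
                                                   : "disconnected");
            XRRFreeOutputInfo(out);
        }

        XRRFreeScreenResources(res);
        return 0;
    }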

What else can it do? LUT for gamma adjustment. Arbitrary output properties. User defined modes. New driver-independent API.

Minor driver problems. XAA is kind of gross, DRI is fixed.

Protocol is finished, DIX implementation is finished, intel driver working, radeon and nouveau nearly working, gtk-based UI demo. Need to fix remaining drivers and finish rotation/reflection work.

ajax: Xorg releases and future planning

The current release is almost done, and is blocked on the documentation release. That's mostly a build system issue, and we should work on making the docs build something that is less of a disaster for future releases.

However, the release process is far too heavyweight, and we have pieces of our process which are not adding value and burning out release managers. Prevailing opinion has been that this is probably a job for more than one person, but we might be able to get it down to something manageable by a single person sustainably. We've also been doing a better job of documenting our processes (MakingReleases), so that the release process is transferable between people.

Currently, the release manager is rolling releases for everything that has been touched but not released as part of releasing the katamari. This was not the initial plan for modularization, which involved getting individual maintainers per module. There are several possible solutions to this burden on the release wrangler.

The badged tarball idea needs to be abandoned. The badged tarballs don't match the unbadged versions, and don't even distcheck. Instead, our only badging process that's recorded outside of the release announcement would be a git tag for the katamari in each module (which is somewhat failure-prone, but can be remedied if mistakes are found).

As for abandoning modules: dropping the apps katamari entirely was discussed but rejected.

The X Server release should be decoupled from the katamari release. So the upcoming server release planned (~2 week timeframe) will be 1.3 instead of 1.2.1.

We should do a better job of maintaining stable branches. This is the responsibility of OSVs and others maintaining stable releases. However, if we accomplish our goal of faster releases, it may be less interesting.

The planned features for 1.3:

The planned features for 7.3 katamari (1.4+ xserver, ~May 2007)

Desired goals for 7.4 katamari (1.5+ xserver, ~Nov 2007)

ABI/API compatibility was discussed. One proposal was the kernel model: just release the server when server people want to, and if that breaks things then people get to recover. keithp's proposal is maintaining API compatibility between the last stable release and master, but not ABI. This is being experimented with in the intel driver, and appears promising: it allows more change, but still maintains what OSVs really want (recompiling lets you take a new driver and run it on an old X server, or take a new X server and run it with old driver code).

Eric Anholt is signed up for the 7.3 release management. Badged tarballs will not be done. Rolling releases for minor fixes will be up to the people interested in seeing those fixes go out -- only the X server and other critical releases will be rolled by Eric.

(keith doing nerdcore about randr1.2 api)

Thursday, February 8

Board Q/A session

Observations: the lack of structured talk scheduling doesn't seem to reduce the amount of quality technical discussion. The flip side is that the small sessions need to be responsible for reporting their results.

Do we need to go as far as professional facilitators? Maybe not. But hey, look, office supplies! (Pass stuff out.) Let's try to write stuff down.

Do we want to try changing the format to a hothouse / retreat? Yeah, maybe. One thing lacking this year was that, last year, almost everyone was at the same hotel, which encouraged after-hours discussion and work. Suggestion for next year is to either reserve hotel earlier so we can get block allocation, or try the retreat format.

It's been suggested that the board should facilitate small group meetings. So, yeah, if you have a proposal for this kind of thing, please send it to the board so they can get that started. All reasonable proposals will be entertained. Local groups, small interest stuff, etc.

Question from a non-member: I'm not one, should I be? It lets you vote on the board, host events, represent the organization at conferences and trade shows, etc. It's free, there's really no downside, and it announces your participation in the X development community. Do it!

Suggestion: Is it possible to do something like Summer of Code directly from X? Sure. Board hasn't finalized anything yet, but could be doable. The important thing about SoC is that it's not about producing code, it's about teaching students.

Joe Miseli: Display technology, VESA, and EDID

PDF slides

EDID is the mechanism by which monitors describe themselves. It's now technically called E-EDID since it's been extended, but it's still the same thing. EDID is now at version 1.4.

Review of various display technologies in use today: CRT, LCD, plasma, projector, etc. Most fixed-pixel-array displays have scalers. Newer and higher-resolution displays have no pure scalers, so EDID becomes critical for driving them correctly.

Review of display interfaces: VGA, DVI-{D,I}, HDMI, DisplayPort, UDI...

A display may have many timings; CRTs, for example, are basically infinitely adaptable within their sync range. Some displays may have only a few timings, or even just one. All cases are handled by EDID. The purpose of EDID is to make sure that something comes up at power-on.

Signalling: Video content, blanking, sync, I2C digital signals for communication. Connector may carry other pins too (USB, audio, etc). (Timing math.) Timing specs include DMT, GTF, CVT. DMT was pages of explicit timings, GTF and CVT are formulas.

Sync. Three main types: separate, where H and V can be positive or negative on independent pins; composite, where they're combined into one phase-coherent signal (unsupported but sometimes works for DVI); and sync-on-green, which is a DC bias added to the green signal.

Version table: 1.1 in 1996, 1.2 in 1997, 1.3 in 2000, 1.4 in 2006. 1.3 was the first that could handle extension blocks.
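For reference, a minimal sketch of validating an EDID base block in C, using the stable parts of the layout: the fixed 8-byte header, the extension-count byte at offset 126, and the checksum byte at offset 127 that makes all 128 bytes sum to zero mod 256:

    #include <stdint.h>
    #include <string.h>

    /* Validate a 128-byte EDID base block and report how many
     * extension blocks (e.g. CEA) follow it. */
    static int edid_valid(const uint8_t *blk, int *extensions)
    {
        static const uint8_t header[8] =
            { 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00 };
        uint8_t sum = 0;

        if (memcmp(blk, header, sizeof header) != 0)
            return 0;                  /* not an EDID base block */

        for (int i = 0; i < 128; i++)
            sum += blk[i];             /* byte 127 makes this wrap to 0 */
        if (sum != 0)
            return 0;                  /* corrupt block */

        *extensions = blk[126];        /* extension blocks that follow */
        return 1;
    }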

Timing priority order: Preferred timing, other detailed timings in the base block, other detailed timings in the VTB-EXT, 3-byte CVT codes in base or extended blocks, standard timings, established timings, base video mode.

Related VESA standards: CVT, DMT, DPM, DDC/CI, DPVL, MCCS, MDDI. DDC is the carrier channel for EDID. CVT (Coordinated Video Timings) and DMT (Display Monitor Timing) are timing specs. DPM (successor to DPMS) is for Display Power Management: basically, missing pulses on either or both sync pins plus inactive video means low-power mode. DPMS was more complicated, with standby and suspend modes between On and Off depending on which sync pins wiggle.
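On the X client side, these power levels are exposed through the DPMS extension; a minimal sketch using libXext:

    #include <X11/Xlib.h>
    #include <X11/extensions/dpms.h>

    /* Force the monitor into a low-power state via the DPMS extension. */
    void monitor_standby(Display *dpy)
    {
        int event_base, error_base;

        if (DPMSQueryExtension(dpy, &event_base, &error_base) &&
            DPMSCapable(dpy)) {
            DPMSEnable(dpy);
            /* Levels: DPMSModeOn, DPMSModeStandby, DPMSModeSuspend,
             * DPMSModeOff. */
            DPMSForceLevel(dpy, DPMSModeStandby);
            XFlush(dpy);
        }
    }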

Extensions: CEA, VTB, DI, LS, DPVL. (consumer electronics conformance, video timing block, display information, localised strings, and digital packet video link.) CEA is the DTV profile for uncompressed high speed digital video. DDC/CI allows for control of displays, basically anything you could do from the front panel and more. MCCS is the standard command set for DDC/CI. DPVL allows for only updating the regions of the screen that have changed.

Lots of changes in EDID 1.4. (Will acquire slides for the list.)

Eamon Walsh: Security in X

(See SecurityTalkAgenda)

"The trick is to have a consistent set of lies."

Bart Massey: Cut and Paste

Something about elephants.

Cut and paste suffers from weak guidelines, data type hell, and indifference. The problems really exist, and are easy to find and demo. Need to figure out the requirements, then the fixes, draft the spec, write the library to make it work, and fix the visible important apps.

Amusingly enough, DND works more reliably than copy and paste. The problem space here is cut/copy/paste versus select/insert. Should work for text, pictures, and "other".

How it works: highlighting a region makes it the PRIMARY selection. You can either middle-click to paste the PRIMARY, or hit ^C to copy it to the CLIPBOARD. ^V pastes from CLIPBOARD. That would be a nice theory, but it's not consistently implemented.
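The nice theory, expressed as Xlib calls (a sketch; a real client must also service SelectionRequest events from other clients, which is omitted here):

    #include <X11/Xlib.h>
    #include <X11/Xatom.h>

    /* "Copy": claim ownership of the CLIPBOARD selection. The owner must
     * then answer SelectionRequest events by converting its data to the
     * requested target type. */
    void claim_clipboard(Display *dpy, Window win)
    {
        Atom clipboard = XInternAtom(dpy, "CLIPBOARD", False);
        XSetSelectionOwner(dpy, clipboard, win, CurrentTime);
    }

    /* "Paste": ask the current owner to convert the selection; the data
     * arrives later via a SelectionNotify event, in the named property. */
    void request_paste(Display *dpy, Window win)
    {
        Atom clipboard = XInternAtom(dpy, "CLIPBOARD", False);
        Atom utf8 = XInternAtom(dpy, "UTF8_STRING", False);
        Atom prop = XInternAtom(dpy, "PASTE_BUFFER", False); /* arbitrary */

        XConvertSelection(dpy, clipboard, utf8, prop, win, CurrentTime);
    }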

Non-text selection is also completely broken.

Anyway it's all busted and it needs to be fixed. Will be starting the CCP Strike Force to make this work. Join! Do stuff!

Friday, February 9

David Reveman: Compiz

What is compiz? Compositing window manager with flexible plugin architecture.

Latest additions include multihead support and pluggable fragment shading. (shiny demo of stacked fragment plugins)

Wants to switch to software cursors. Doing this properly requires modifying the Fixes extension's reporting of cursor changes to include the sprite dimensions and hotspot.
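Today a compositing manager has to select for cursor notifications and then fetch the image (with its dimensions and hotspot) in a separate round trip; a sketch with the existing XFixes calls, which is presumably what the proposed change would streamline:

    #include <X11/Xlib.h>
    #include <X11/extensions/Xfixes.h>

    /* Track cursor changes the way a software-cursor compositor must:
     * get notified, then fetch the image in a second round trip. */
    void track_cursor(Display *dpy, Window root)
    {
        XFixesSelectCursorInput(dpy, root, XFixesDisplayCursorNotifyMask);

        /* ...later, on each XFixesCursorNotify event: */
        XFixesCursorImage *img = XFixesGetCursorImage(dpy);
        /* img->width, img->height, img->xhot, img->yhot, and ARGB data
         * in img->pixels. */
        XFree(img);
    }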

Also wants to change the Xv interface to allow the compositing manager to do the colorspace conversion and scaling, which would be slightly more efficient in terms of copies, gets frame sync (potentially) right, etc.

Drawing synchronization. Could be done entirely client-side, but could also have server support. Most of the server-support options are fairly brutal; needs more thought. Do the client side first.

Input transformation. Need it so you can interact (correctly) with transformed windows. Match the triangle primitives of the windows to an input mesh, and do the straightforward pick, as sketched below. Implementation has started: Composite clients provide pairs of triangles that specify the mapping from the composite window to the redirected subwindow. Minimal DIX changes to XYToWindow, WriteEventsToClient, and TranslateCoords.
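The pick itself is straightforward barycentric math; a sketch of mapping a point through one triangle pair (the data structures here are hypothetical, not the actual implementation):

    /* Map a point inside an on-screen (transformed) triangle back to the
     * corresponding point in the redirected window. Illustrative only. */
    typedef struct { double x, y; } Point;

    static int map_through_triangle(const Point dst[3], /* on-screen */
                                    const Point src[3], /* window space */
                                    Point p, Point *out)
    {
        double d = (dst[1].y - dst[2].y) * (dst[0].x - dst[2].x)
                 + (dst[2].x - dst[1].x) * (dst[0].y - dst[2].y);
        double a = ((dst[1].y - dst[2].y) * (p.x - dst[2].x)
                 +  (dst[2].x - dst[1].x) * (p.y - dst[2].y)) / d;
        double b = ((dst[2].y - dst[0].y) * (p.x - dst[2].x)
                 +  (dst[0].x - dst[2].x) * (p.y - dst[2].y)) / d;
        double c = 1.0 - a - b;

        if (a < 0 || b < 0 || c < 0)
            return 0;                   /* outside this triangle; try next */

        /* Same barycentric weights, applied to the source triangle. */
        out->x = a * src[0].x + b * src[1].x + c * src[2].x;
        out->y = a * src[0].y + b * src[1].y + c * src[2].y;
        return 1;
    }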

Retained drawing interface. Currently have interfaces for decorations, video, thumbnails, blur-behind-window, etc. Want one common interface instead, with tree hierarchy of inheritance, extensible by current plugin architecture.

Quinn Storm: Beryl

Beryl is another GL-based compositing manager. Started as a fork of compiz; has many more visual effects, a somewhat more experimental plugin interface, etc.

(mostly demo)

Brian Paul: Mesa

Memory management update. Initial development done for unified memory architectures like i915, currently working on VRAM architectures. Accelerated readback for glReadPixels and glCopyPixels not quite sorted, but soon. Also working on sub-allocator for more efficient management. White paper coming soon!

VBO changes. Will enable storing vertex data in GPU memory, avoids per-draw host-to-GPU memory transfer. All vertex-related drawing code done in one place now; glBegin/glEnd converted into VBOs, as are display lists. Simplifies life for driver writers too, including helper code for buffers larger than the hardware can handle. Todo: implement compiled vertex arrays in the same way, update the DRI drivers to use the new path.
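From the application side this is the standard buffer-object path (GL 1.5 / ARB_vertex_buffer_object); a minimal sketch, assuming headers that declare the GL 1.5 entry points:

    #include <GL/gl.h>

    /* Upload vertex data to GPU memory once, then draw from it,
     * avoiding the per-draw host-to-GPU transfer described above. */
    void draw_triangle_vbo(void)
    {
        static const GLfloat verts[] = {
             0.0f,  0.8f,
            -0.8f, -0.8f,
             0.8f, -0.8f,
        };
        GLuint vbo;

        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof verts, verts, GL_STATIC_DRAW);

        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, (void *) 0); /* offset into VBO */
        glDrawArrays(GL_TRIANGLES, 0, 3);

        glDeleteBuffers(1, &vbo);
    }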

OpenGL shading language. Mesa has kind of had this support for a while, but had no hardware support, very slow, etc. Previously had support for the ATI and nVidia extensions, but GLSL wasn't integrated with this. Shaders are clearly the way forward, so we need to get Mesa fixed to handle this.

(example shader program walkthrough.)

(Shader compiler diagram.) ATI_fp and friends had one front-end, one middle-end for representation, and N backends for execution. GLSL had its own front-end, but different middle- and backends. The new model unifies this, and adds a stage for optimization and hinting.

Kept the GLSL tokenizer/parser, but replaced the rest. Pretty much a straightforward compiler design. Need to extend the IR to handle new instructions (jump, branch) and addressing modes. Other changes are needed to handle the differences between the ARB shader extensions and the GL 2.0 version.

The GL 2.0 API interface is complete, and supports most of the language except: arrays, structs, multi-shader linking, and integer ops. Need to implement indirect addressing for arrays. Register allocation is fair but not great. No subroutining; everything is inlined. No hardware backends have been updated for this yet, but it's not a huge job to add; backends need to be extended to say which instructions are supported. Error detection is kinda poor. Possible extras: profiler, histogram, debugger, peephole optimizations, etc.
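A minimal sketch of the GL 2.0 entry points involved (the shader source is just illustrative, and error checking is omitted):

    #include <GL/gl.h>

    /* Compile and link a trivial GLSL fragment shader through the
     * GL 2.0 API. */
    GLuint build_program(void)
    {
        static const char *fs_src =
            "void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }";

        GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fs, 1, &fs_src, NULL);
        glCompileShader(fs);

        GLuint prog = glCreateProgram();
        glAttachShader(prog, fs);
        glLinkProgram(prog);
        glUseProgram(prog);
        return prog;
    }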

Andy Ritger: Multi-GPU X Screens

Why would you want to use multiple GPUs? Solve larger problems, throw more power at large problems. SLI is one technique for splitting the scene among multiple GPUs. Xinerama is another technique for big desktop. The two are not mutually exclusive.

SLI allows multiple GPUs to render one X screen. Multiple modes: alternate frame rendering, scan line interleaving, and SLI + antialiasing.

Xinerama means two things. One is the protocol, which defines the "screen" layout. The internal implementation is the code that splits the protocol requests among multiple hardware drivers.

Unofficial terminology: the "physical" X screen is a video memory buffer in a single GPU, the "logical" X screen is the object as visible through the protocol to clients.
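The protocol half is what clients see through libXinerama; a minimal sketch of querying the per-monitor layout of a logical X screen:

    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/Xinerama.h>

    /* List the per-monitor regions that make up one logical X screen. */
    void list_heads(Display *dpy)
    {
        int n;

        if (!XineramaIsActive(dpy))
            return;

        XineramaScreenInfo *heads = XineramaQueryScreens(dpy, &n);
        for (int i = 0; i < n; i++)
            printf("head %d: %dx%d+%d+%d\n", heads[i].screen_number,
                   heads[i].width, heads[i].height,
                   heads[i].x_org, heads[i].y_org);
        XFree(heads);
    }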

Use cases: Powerwalls. Caves. Large desktop. Multiple logical X screens.

What do we have today? TwinView/MergedFB, where two display devices are connected to one physical X screen (vram buffer). Multiple X screens per GPU, sort of the classic X "Zaphod" mode; allows you to advertise different capabilities per screen. Xinerama, where you have multiple physical X screens glued together into one logical X screen. RANDR 1.2 operates on a logical X screen, basically allows dynamic reconfig of MergedFB/TwinView.

What's nice about the existing Xinerama? Transparent to X drivers, and mostly works today. What's bad? Lots of resource duplication, which causes performance issues. What does it mean to redirect windows with multi-GPU X screens? RANDR and Xinerama (implementation) are mutually exclusive.

Ideas: Post RANDR 1.2, expose physical X screen in the RANDR protocol, which would allow the combination of the two. Apply DMX lessons and work to the Xinerama implementation for optimization. Expose ways for compmgrs to control what GPU receives the allocation for a redirected pixmap.

Nothing solid yet. Think about how to address these issues.

Bart Massey: XCB mini-status

It lives! 1.0 released, included in X11R7.2. Supports most of the X protocol. Team of about 6 active contributors with occasional casuals. Used for the client-side protocol libraries, but not the server (yet).

XCB is XML descriptions of the protocol, with XSLT to produce the C "top half". The C bottom-half contains the transport and multiplexing code. Xlib now built on the XCB bottom-half. Conceptualised in ~2000, originally done in m4 instead of XSLT.

There is minimal magic here; it's just protocol. Latency hiding is free, threading just works, error handling is fixed, and the protocol docs are handy for other tools like wireshark or language bindings.
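The cookie/reply split is where the latency hiding comes from: issue a batch of requests first, then collect the replies. A minimal sketch:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <xcb/xcb.h>

    int main(void)
    {
        const char *names[] = { "PRIMARY", "CLIPBOARD", "UTF8_STRING" };
        xcb_intern_atom_cookie_t cookies[3];
        xcb_connection_t *c = xcb_connect(NULL, NULL);

        /* Fire off all three requests without waiting... */
        for (int i = 0; i < 3; i++)
            cookies[i] = xcb_intern_atom(c, 0, strlen(names[i]), names[i]);

        /* ...then collect the replies: one round trip's worth of waiting. */
        for (int i = 0; i < 3; i++) {
            xcb_intern_atom_reply_t *r =
                xcb_intern_atom_reply(c, cookies[i], NULL);
            printf("%s = %u\n", names[i], r ? r->atom : 0);
            free(r);
        }

        xcb_disconnect(c);
        return 0;
    }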

Lets you mix and match Xlib and XCB code, which allows a gradual transition (make the transition short, of course). It's slightly volatile and sometimes awkward to work with, the team bandwidth is slightly low, etc. But it's a good start. Still need client libraries, XSLT cleanups, getting XCB into the toolkits, using it on the server side, and growing the team.

Question: can I handle disconnect politely with XCB? Yes, but maybe not if you're using the Xlib frontend.