- Timestamp: Nov 29, 2016, 5:30:22 AM
- Location: trunk/src/os2ahci
- Files: 2 deleted, 15 edited
trunk/src/os2ahci/README
r177 → r178

-AHCI Driver for OS/2 v1.32
+AHCI Driver for OS/2 v2.01
+
+WARNING: This is a alpha level build of this driver. Use
+in a production environment is not recommended.
…
 Copyright (c) 2011 thi.guten Software Development
 Copyright (c) 2011 Mensys B.V.
-Copyright (c) 2013-2015 David Azarewicz
+Copyright (c) 2013-2016 David Azarewicz

 Authors: Christian Mueller, Markus Thielen
…
 Option    Description
 ------------------------------------------------------------------------------
-/b:<baud> Initialize the COM port to the specified baud rate. Allowable
+/B:<baud> Initialize the COM port to the specified baud rate. Allowable
           baud values are: 300, 600, 1200, 2400, 4800, 9600, 19200,
-          38400, 57600, and 115200. /b has no effect if /c is not also
-          specified. If /b is not specified, the COM port is not
+          38400, 57600, and 115200. /B has no effect if /C is not also
+          specified. If /B is not specified, the COM port is not
           initialized. For example, if you are using the kernel debugger,
           the kernel debugger initializes the COM port so you should not
           use this switch.

-/c:<n>    Set debug COM port base address. Values for n can be:
+/C:<n>    Set debug COM port base address. Values for n can be:
           1 = COM1
           2 = COM2
…
           The default is 0. If set to 0 then no output goes to the COM port.

-/d[:n]    Debug output to COM port/tracebuffer. Values for n can be:
+/D[:n]    Debug output to COM port/debug buffer. Values for n can be:
           1 = requests
           2 = detailed
-          3 = verbose, including MMIO operations
+          3 = verbose
           If :n is not specified the debug level is incremented for
-          each /d specified.
-
-/w        Allows the tracebuffer to wrap when full.
-
-/v[:n]    Display informational messages during boot. Values for n can be:
+          each /D specified.
+
+/W        Allows the debug buffer to wrap when full.
+
+/V[:n]    Display informational messages during boot. Values for n can be:
           1 = Display sign on banner
           2 = Display adapter information
           If :n is not specified the verbosity level is incremented for
-          each /v specified.
-
-/g:<vendor>:<device> Add generic PCI ID to list of supported AHCI adapters
-          (e.g. /g:8086:2829)
-
-/t        Perform thorough PCI ID scan; default = on, can be
-          turned off with /!t to perform only a PCI class scan
-
-/f        Force the use of the HW write cache when using NCQ
+          each /V specified.
+
+/G:<vendor>:<device> Add generic PCI ID to list of supported AHCI adapters
+          (e.g. /G:8086:2829)
+
+/T        Perform thorough PCI ID scan; default = on, can be
+          turned off with /!T to perform only a PCI class scan
+
+/F        Force the use of the HW write cache when using NCQ
           commands; see "Native Command Queuing" below for
           further explanation (default = off)

-/r        Reset ports during initialization (default = on)
-          Can be turned off with /!r, however, when the
+/R        Reset ports during initialization (default = on)
+          Can be turned off with /!R, however, when the
           [Intel] AHCI controller was found to be
           initialized by the BIOS in SATA mode, ports will
-          always be reset even when /!r was specified
-
-/a:n      Set adapter to n for adapter-specific options
+          always be reset even when /!R was specified
+
+/A:n      Set adapter to n for adapter-specific options
           (default = -1, all adapters)

-/p:n      Set port to n for port-specific options
+/P:n      Set port to n for port-specific options
           (default = -1, all ports)

-/i        Ignore current adapter if no port has been specified.
+/I        Ignore current adapter if no port has been specified.
           Otherwise, ignore the current port on the current adapter.
…
 Option    Description
 ------------------------------------------------------------------------------
-/s        Enable SCSI emulation for ATAPI units (default = on)
+/S        Enable SCSI emulation for ATAPI units (default = on)
           SCSI emulation is required for tools like cdrecord.

-/n        Enable NCQ (Native Command Queuing) for hard disks
+/N        Enable NCQ (Native Command Queuing) for hard disks
           (default = off)

-/ls       Set link speed (default = 0):
+/LS       Set link speed (default = 0):
           0 = maximum,
           1 = limit to generation 1
…
           3 = limit to generation 3

-/lp       Set link power management (default = 0):
+/LP       Set link power management (default = 0):
           0 = full power management,
           1 = transitions to "partial slumber state" disabled,
…
 Port-specific options depend on the currently active adapter
-and port selector (/a and /p). Those selectors are -1 per default
+and port selector (/A and /P). Those selectors are -1 per default
 which means "all" adapters/ports. The scope can be reduced by limiting
-it to an adapter (/a) or an adapter and a port (/a and /p). The scope
+it to an adapter (/A) or an adapter and a port (/A and /P). The scope
 can be reset by setting the corresponding option back to -1.

 For example:

-BASEDEV=OS2AHCI.ADD /n /a:0 /p:5 /!n /a:1 /p:-1 /!n
+BASEDEV=OS2AHCI.ADD /N /A:0 /P:5 /!N /A:1 /P:-1 /!N

 This has the following effect:
…
 disks, it's currently turned off by default until we have more feedback
 from OS/2 users. In order to turn on NCQ, just add the command line
-option "/n" to OS2AHCI.ADD.
+option "/N" to OS2AHCI.ADD.

 NCQ and HW Caches
…
 performance loss. In order to prevent OS2AHCI from disabling the HW
 cache when so requested by upstream code, please use the command line
-option "/f".
+option "/F".

 This may, of course, result in data loss in case of power failures but
…
 - When suspending, rebooting or shutting down, OS2AHCI always flushes
-  the HW disk cache regardless of the "/f" or "/n" command line options.
+  the HW disk cache regardless of the "/F" or "/N" command line options.
…
 There are three kinds of IDE/ATA/SATA controllers:

-1. Legacy controllers (IDE or SATA) without AHCI support
+1. Older controllers (IDE or SATA) without AHCI support
    This kind of controller will only be recognized by IDE drivers
    (IBM1S506.ADD or DANIS506.ADD).

-2. AHCI-capable controllers which supports IDE/SATA legacy interfaces
+2. AHCI-capable controllers which supports IDE/SATA interfaces
    This kind of controller will work with IDE or AHCI drivers and it's
    up to the user to decide which driver to use.
…
 Assume a DELL D630 or a Thinkpad T60. The hard disk is attached to the
 SATA/AHCI controller of the ICH-7 hub while the CDROM is attached to the
-legacy PATA IDE controller. This allows two different configurations:
+PATA IDE controller. This allows two different configurations:

 1. Drive HDD and CDROM via DANIS506.ADD
 2. Drive HDD via OS2AHCI.ADD and CDROM via DANIS506.ADD

-OS2AHCI.ADD can't drive the CDROM because it's attached to a legacy PATA
+OS2AHCI.ADD can't drive the CDROM because it's attached to a PATA
 IDE controller which doesn't support AHCI.
…
 ==========

+v.2.01 01-Oct-2016 - David Azarewicz
+       Major reorganization of the entire driver.
+       Enhanced debugging support.
+
 v.1.32 09-Nov-2013 - David Azarewicz
        Fix for some hardware that reports incorrect status
…
 Added LVM aware disk geometry reporting.
 Begin to add disk information report - not finished yet.
-Removed undocumented /q switch and made the driver quiet by default.
+Removed undocumented /Q switch and made the driver quiet by default.
 Debug output improvements.
-Added /b switch for setting debug baud rate.
+Added /B switch for setting debug baud rate.
 Fixed up time delay functions
trunk/src/os2ahci/ahci.c
r176 → r178

  * Copyright (c) 2011 thi.guten Software Development
  * Copyright (c) 2011 Mensys B.V.
- * Copyright (c) 2013-2015 David Azarewicz
+ * Copyright (c) 2013-2016 David Azarewicz
  *
  * Authors: Christian Mueller, Markus Thielen
…
 /* -------------------------- function prototypes -------------------------- */

 static void ahci_setup_device(AD_INFO *ai, int p, int d, u16 *id_buf);

 /* ------------------------ global/static variables ------------------------ */
…
  * pointers to all handler functions which may need to be overridden.
  */
-u16 initial_flags[] = {
+u16 initial_flags[] =
+{
   0,                    /* board_ahci */
   AHCI_HFLAG_NO_NCQ |   /* board_ahci_vt8251 */
…
  * index to the corresponding IRQ.
  */
 static u16 irq_map[MAX_AD];  /* IRQ level for each stub IRQ func */
 static int irq_map_cnt;      /* number of IRQ stub funcs used */

 /* ----------------------------- start of code ----------------------------- */
…
  * bit 0 will be set when the interrupt was not handled.
  */
-#define call_ahci_intr(i) return(ahci_intr(irq_map[i]) >> 1)
-
-static USHORT _cdecl _far irq_handler_00(void) { call_ahci_intr(0); }
-static USHORT _cdecl _far irq_handler_01(void) { call_ahci_intr(1); }
-static USHORT _cdecl _far irq_handler_02(void) { call_ahci_intr(2); }
-static USHORT _cdecl _far irq_handler_03(void) { call_ahci_intr(3); }
-static USHORT _cdecl _far irq_handler_04(void) { call_ahci_intr(4); }
-static USHORT _cdecl _far irq_handler_05(void) { call_ahci_intr(5); }
-static USHORT _cdecl _far irq_handler_06(void) { call_ahci_intr(6); }
-static USHORT _cdecl _far irq_handler_07(void) { call_ahci_intr(7); }
-
-PFN irq_handlers[] = {
+#define call_ahci_intr(i) return(ahci_intr(irq_map[i]))
+
+static USHORT _cdecl irq_handler_00(void) { call_ahci_intr(0); }
+static USHORT _cdecl irq_handler_01(void) { call_ahci_intr(1); }
+static USHORT _cdecl irq_handler_02(void) { call_ahci_intr(2); }
+static USHORT _cdecl irq_handler_03(void) { call_ahci_intr(3); }
+static USHORT _cdecl irq_handler_04(void) { call_ahci_intr(4); }
+static USHORT _cdecl irq_handler_05(void) { call_ahci_intr(5); }
+static USHORT _cdecl irq_handler_06(void) { call_ahci_intr(6); }
+static USHORT _cdecl irq_handler_07(void) { call_ahci_intr(7); }
+
+PFN irq_handlers[] =
+{
   (PFN) irq_handler_00, (PFN) irq_handler_01, (PFN) irq_handler_02,
   (PFN) irq_handler_03, (PFN) irq_handler_04, (PFN) irq_handler_05,
…
 };

+#ifdef DEBUG
 void ahci_dump_host_regs(AD_INFO *ai, int bios_regs)
 {
-#ifdef DEBUG
   int i;
   u32 version;

-  aprintf("AHCI global registers for adapter %d %d:%d:%d irq=%d addr=0x%lx\n",
-          ad_no(ai), ai->bus, ai->dev_func>>3, ai->dev_func&7, ai->irq, ai->mmio_phys);
+  DPRINTF(2,"AHCI global registers for adapter %d %d:%d:%d irq=%d addr=0x%x\n",
+          ad_no(ai),
+          PCI_BUS_FROM_BDF(ai->bus_dev_func),
+          PCI_DEV_FROM_BDF(ai->bus_dev_func),
+          PCI_FUNC_FROM_BDF(ai->bus_dev_func), ai->irq, ai->mmio_phys);

   for (i = 0; i <= HOST_CAP2; i += sizeof(u32)) {
…
     if (i == HOST_VERSION) version = val;

-    ntprintf(" %02x: %08lx", i, val);
+    NTPRINTF(" %02x: %08lx", i, val);

     if (i == HOST_CAP) {
-      ntprintf(" -");
-      if (val & HOST_CAP_64) ntprintf(" 64bit");
-      if (val & HOST_CAP_NCQ) ntprintf(" ncq");
+      NTPRINTF(" -");
+      if (val & HOST_CAP_64) NTPRINTF(" 64bit");
+      if (val & HOST_CAP_NCQ) NTPRINTF(" ncq");
…
-      if (val & HOST_CAP_EMS) ntprintf(" ems");
-      if (val & HOST_CAP_SXS) ntprintf(" sxs");
-      ntprintf(" cmd_slots:%d", (u16)((val >> 8) & 0x1f) + 1);
-      ntprintf(" ports:%d", (u16)(val & 0x1f) + 1);
+      if (val & HOST_CAP_EMS) NTPRINTF(" ems");
+      if (val & HOST_CAP_SXS) NTPRINTF(" sxs");
+      NTPRINTF(" cmd_slots:%d", ((val >> 8) & 0x1f) + 1);
+      NTPRINTF(" ports:%d", (val & 0x1f) + 1);
     } else if (i == HOST_CTL) {
-      ntprintf(" -");
-      if (val & HOST_AHCI_EN) ntprintf(" ahci_enabled");
-      if (val & HOST_IRQ_EN) ntprintf(" irq_enabled");
-      if (val & HOST_RESET) ntprintf(" resetting");
+      NTPRINTF(" -");
+      if (val & HOST_AHCI_EN) NTPRINTF(" ahci_enabled");
+      if (val & HOST_IRQ_EN) NTPRINTF(" irq_enabled");
+      if (val & HOST_RESET) NTPRINTF(" resetting");
     } else if (i == HOST_CAP2) {
-      ntprintf(" -");
-      if (val & HOST_CAP2_BOH) ntprintf(" boh");
-      if (val & HOST_CAP2_NVMHCI) ntprintf(" nvmhci");
-      if (val & HOST_CAP2_APST) ntprintf(" apst");
-    }
-    ntprintf("\n");
-  }
-#endif
+      NTPRINTF(" -");
+      if (val & HOST_CAP2_BOH) NTPRINTF(" boh");
+      if (val & HOST_CAP2_NVMHCI) NTPRINTF(" nvmhci");
+      if (val & HOST_CAP2_APST) NTPRINTF(" apst");
+    }
+    NTPRINTF("\n");
+  }
 }

 void ahci_dump_port_regs(AD_INFO *ai, int p)
 {
-#ifdef DEBUG
-  u8 _far *port_mmio = port_base(ai, p);
-
-  aprintf("AHCI port %d registers:\n", p);
-  ntprintf("  PORT_CMD       = 0x%lx\n", readl(port_mmio + PORT_CMD));
-  ntprintf("command engine status:\n");
-  ntprintf("  PORT_SCR_ACT   = 0x%lx\n", readl(port_mmio + PORT_SCR_ACT));
-  ntprintf("  PORT_CMD_ISSUE = 0x%lx\n", readl(port_mmio + PORT_CMD_ISSUE));
-  ntprintf("link/device status:\n");
-  ntprintf("  PORT_SCR_STAT  = 0x%lx\n", readl(port_mmio + PORT_SCR_STAT));
-  ntprintf("  PORT_SCR_CTL   = 0x%lx\n", readl(port_mmio + PORT_SCR_CTL));
-  ntprintf("  PORT_SCR_ERR   = 0x%lx\n", readl(port_mmio + PORT_SCR_ERR));
-  ntprintf("  PORT_TFDATA    = 0x%lx\n", readl(port_mmio + PORT_TFDATA));
-  ntprintf("interrupt status:\n");
-  ntprintf("  PORT_IRQ_STAT  = 0x%lx\n", readl(port_mmio + PORT_IRQ_STAT));
-  ntprintf("  PORT_IRQ_MASK  = 0x%lx\n", readl(port_mmio + PORT_IRQ_MASK));
-  ntprintf("  HOST_IRQ_STAT  = 0x%lx\n", readl(ai->mmio + HOST_IRQ_STAT));
-#endif
-}
+  u8 *port_mmio = port_base(ai, p);
+
+  dprintf(0,"AHCI port %d registers:\n", p);
+  dprintf(0,"  PORT_CMD       = 0x%x\n", readl(port_mmio + PORT_CMD));
+  dprintf(0," command engine status:\n");
+  dprintf(0,"  PORT_SCR_ACT   = 0x%x\n", readl(port_mmio + PORT_SCR_ACT));
+  dprintf(0,"  PORT_CMD_ISSUE = 0x%x\n", readl(port_mmio + PORT_CMD_ISSUE));
+  dprintf(0," link/device status:\n");
+  dprintf(0,"  PORT_SCR_STAT  = 0x%x\n", readl(port_mmio + PORT_SCR_STAT));
+  dprintf(0,"  PORT_SCR_CTL   = 0x%x\n", readl(port_mmio + PORT_SCR_CTL));
+  dprintf(0,"  PORT_SCR_ERR   = 0x%x\n", readl(port_mmio + PORT_SCR_ERR));
+  dprintf(0,"  PORT_TFDATA    = 0x%x\n", readl(port_mmio + PORT_TFDATA));
+  dprintf(0," interrupt status:\n");
+  dprintf(0,"  PORT_IRQ_STAT  = 0x%x\n", readl(port_mmio + PORT_IRQ_STAT));
+  dprintf(0,"  PORT_IRQ_MASK  = 0x%x\n", readl(port_mmio + PORT_IRQ_MASK));
+  dprintf(0,"  HOST_IRQ_STAT  = 0x%x\n", readl(ai->mmio + HOST_IRQ_STAT));
+}
+#endif
…
-  ddprintf("ahci_save_bios_config: BIOS AHCI mode is %d\n", ai->bios_config[HOST_CTL / sizeof(u32)] & HOST_AHCI_EN);
+  DPRINTF(3,"ahci_save_bios_config: BIOS AHCI mode is %d\n", ai->bios_config[HOST_CTL / sizeof(u32)] & HOST_AHCI_EN);
…
-#ifdef DEBUG
-  /* print AHCI register debug information */
-  if (debug) ahci_dump_host_regs(ai, 1);
-#endif
+  DUMP_HOST_REGS(2,ai,1);

   /* Save working copies of CAP, CAP2 and port_map and remove broken feature
…
   ai->port_map = ai->bios_config[HOST_PORTS_IMPL / sizeof(u32)];

-  if (ai->pci->board >= sizeof(initial_flags) / sizeof(*initial_flags)) {
-    dprintf("error: invalid board index in PCI info\n");
+  if (ai->pci->board >= sizeof(initial_flags) / sizeof(*initial_flags))
+  {
+    DPRINTF(0,"error: invalid board index in PCI info\n");
     return(-1);
   }
…
-  if ((ai->cap & HOST_CAP_NCQ) && (ai->flags & AHCI_HFLAG_NO_NCQ)) {
-    dprintf("controller can't do NCQ, turning off CAP_NCQ\n");
+  if ((ai->cap & HOST_CAP_NCQ) && (ai->flags & AHCI_HFLAG_NO_NCQ))
+  {
+    DPRINTF(1,"controller can't do NCQ, turning off CAP_NCQ\n");
     ai->cap &= ~HOST_CAP_NCQ;
   }

-  if (!(ai->cap & HOST_CAP_NCQ) && (ai->flags & AHCI_HFLAG_YES_NCQ)) {
-    dprintf("controller can do NCQ, turning on CAP_NCQ\n");
+  if (!(ai->cap & HOST_CAP_NCQ) && (ai->flags & AHCI_HFLAG_YES_NCQ))
+  {
+    DPRINTF(1,"controller can do NCQ, turning on CAP_NCQ\n");
     ai->cap |= HOST_CAP_NCQ;
   }

-  if ((ai->cap & HOST_CAP_PMP) && (ai->flags & AHCI_HFLAG_NO_PMP)) {
-    dprintf("controller can't do PMP, turning off CAP_PMP\n");
+  if ((ai->cap & HOST_CAP_PMP) && (ai->flags & AHCI_HFLAG_NO_PMP))
+  {
+    DPRINTF(1,"controller can't do PMP, turning off CAP_PMP\n");
     ai->cap |= HOST_CAP_PMP;
   }

-  if ((ai->cap & HOST_CAP_SNTF) && (ai->flags & AHCI_HFLAG_NO_SNTF)) {
-    dprintf("controller can't do SNTF, turning off CAP_SNTF\n");
+  if ((ai->cap & HOST_CAP_SNTF) && (ai->flags & AHCI_HFLAG_NO_SNTF))
+  {
+    DPRINTF(1,"controller can't do SNTF, turning off CAP_SNTF\n");
     ai->cap &= ~HOST_CAP_SNTF;
   }

-  if (ai->pci_vendor == PCI_VENDOR_ID_JMICRON &&
-      ai->pci_device == 0x2361 && ai->port_map != 1) {
-    dprintf("JMB361 has only one port, port_map 0x%lx -> 0x%lx\n", ai->port_map, 1);
+  if (ai->pci_vendor == PCI_VENDOR_ID_JMICRON && ai->pci_device == 0x2361 && ai->port_map != 1)
+  {
+    DPRINTF(1,"JMB361 has only one port, port_map 0x%x -> 0x%x\n", ai->port_map, 1);
     ai->port_map = 1;
     ai->hw_ports = 1;
…
-  if (ports < 0) {
+  if (ports < 0)
+  {
     /* more ports in port_map than in HOST_CAP & 0x1f */
     ports = ai->hw_ports;
-    dprintf("implemented port map (0x%lx) contains more ports than nr_ports (%d), using nr_ports\n", ai->port_map, ports);
+    DPRINTF(1,"implemented port map (0x%x) contains more ports than nr_ports (%d), using nr_ports\n", ai->port_map, ports);
     ai->port_map = (1UL << ports) - 1UL;
   }

   /* set maximum command slot number */
-  ai->cmd_max = (u16) ((ai->cap >> 8) & 0x1f);
+  ai->cmd_max = ((ai->cap >> 8) & 0x1f);

   return(0);
…
 int ahci_restore_bios_config(AD_INFO *ai)
 {
-  ddprintf("ahci_restore_bios_config: restoring AHCI BIOS configuration on adapter %d\n", ad_no(ai));
+  DPRINTF(3,"ahci_restore_bios_config: restoring AHCI BIOS configuration on adapter %d\n", ad_no(ai));
…
-  for (p = 0; p < AHCI_MAX_PORTS; p++) {
-    if (ai->port_map & (1UL << p)) {
-      u8 _far *port_mmio = port_base(ai, p);
+  for (p = 0; p < AHCI_MAX_PORTS; p++)
+  {
+    if (ai->port_map & (1UL << p))
+    {
+      u8 *port_mmio = port_base(ai, p);
       u32 tmp;
…
 int ahci_restore_initial_config(AD_INFO *ai)
 {
-  ddprintf("ahci_restore_initial_config: restoring initial configuration on adapter %d\n", ad_no(ai));
+  DPRINTF(3,"ahci_restore_initial_config: restoring initial configuration on adapter %d\n", ad_no(ai));
…
   TIMER Timer;

-  dprintf("controller reset starting on adapter %d\n", ad_no(ai));
+  DPRINTF(2,"controller reset starting on adapter %d\n", ad_no(ai));

   /* we must be in AHCI mode, before using anything AHCI-specific, such as HOST_RESET. */
…
   * the hardware should be considered fried.
   */
-  timer_init(&Timer, 1000);
+  TimerInit(&Timer, 1000);
   while (((tmp = readl(ai->mmio + HOST_CTL)) & HOST_RESET) != 0) {
-    if (timer_check_and_block(&Timer)) {
-      dprintf("controller reset failed (0x%lx)\n", tmp);
+    if (TimerCheckAndBlock(&Timer)) {
+      DPRINTF(0,"controller reset failed (0x%x)\n", tmp);
       return(-1);
     }
…
     u32 tmp16 = 0;

-    ddprintf("ahci_reset_controller: intel detected\n");
+    DPRINTF(1,"ahci_reset_controller: intel detected\n");
     /* configure PCS */
-    pci_read_conf(ai->bus, ai->dev_func, 0x92, sizeof(u16), &tmp16);
+    PciReadConfig(ai->bus, ai->dev_func, 0x92, sizeof(u16), &tmp16);
     if ((tmp16 & ai->port_map) != ai->port_map) {
-      ddprintf("ahci_reset_controller: updating PCS %x/%x\n", (u16)tmp16, ai->port_map);
+      DPRINTF(3,"ahci_reset_controller: updating PCS %x/%x\n", tmp16, ai->port_map);
       tmp16 |= ai->port_map;
-      pci_write_conf(ai->bus, ai->dev_func, 0x92, sizeof(u16), tmp16);
+      PciWriteConfig(ai->bus, ai->dev_func, 0x92, sizeof(u16), tmp16);
     }
   }
…
   AHCI_PORT_CFG *pc;
-  u8 _far *port_mmio = port_base(ai, p);
-
-  if ((pc = malloc(sizeof(*pc))) == NULL) {
-    return(NULL);
-  }
+  u8 *port_mmio = port_base(ai, p);
+
+  if ((pc = MemAlloc(sizeof(*pc))) == NULL) return(NULL);

   pc->cmd_list = readl(port_mmio + PORT_LST_ADDR);
…
 void ahci_restore_port_config(AD_INFO *ai, int p, AHCI_PORT_CFG *pc)
 {
-  u8 _far *port_mmio = port_base(ai, p);
+  u8 *port_mmio = port_base(ai, p);

   /* stop the port, first */
   ahci_stop_port(ai, p);
…
-  free(pc);
+  MemFree(pc);
 }
…
   /* couldn't enable AHCI mode */
-  dprintf("failed to enable AHCI mode on adapter %d\n", ad_no(ai));
+  DPRINTF(0,"failed to enable AHCI mode on adapter %d\n", ad_no(ai));
   return(1);
 }
…
   TIMER Timer;

-  if ((id_buf = malloc(ATA_ID_WORDS * sizeof(u16))) == NULL) {
-    return(-1);
-  }
-
-  if (ai->bios_config[0] == 0) {
-    /* first call */
-    ahci_save_bios_config(ai);
-  }
-
-  if (ahci_enable_ahci(ai)) {
-    goto exit_port_scan;
-  }
+  if ((id_buf = MemAlloc(ATA_ID_WORDS * sizeof(u16))) == NULL) return(-1);
+
+  if (ai->bios_config[0] == 0) ahci_save_bios_config(ai); /* first call */
+
+  if (ahci_enable_ahci(ai)) goto exit_port_scan;

   /* perform port scan */
-  dprintf("ahci_scan_ports: scanning ports on adapter %d\n", ad_no(ai));
-  for (p = 0; p < AHCI_MAX_PORTS; p++) {
+  DPRINTF(1,"ahci_scan_ports: scanning ports on adapter %d\n", ad_no(ai));
+  for (p = 0; p < AHCI_MAX_PORTS; p++)
+  {
     if (!(ai->port_map & (1UL << p))) continue;
     if (port_ignore[ad_no(ai)][p]) continue;

-    ddprintf("ahci_scan_ports: Wait till not busy on port %d\n", p);
+    DPRINTF(3,"ahci_scan_ports: Wait till not busy on port %d\n", p);
     /* wait until all active commands have completed on this port */
-    timer_init(&Timer, 250);
-    while (ahci_port_busy(ai, p)) {
-      if (timer_check_and_block(&Timer)) break;
-    }
-
-    if (!init_complete) {
-      if ((pc = ahci_save_port_config(ai, p)) == NULL) {
-        goto exit_port_scan;
-      }
+    TimerInit(&Timer, 250);
+    while (ahci_port_busy(ai, p))
+    {
+      if (TimerCheckAndBlock(&Timer)) break;
+    }
+
+    if (!init_complete)
+    {
+      if ((pc = ahci_save_port_config(ai, p)) == NULL) goto exit_port_scan;
     }

     /* start/reset port; if no device is attached, this is expected to fail */
-    if (init_reset) {
+    if (init_reset)
+    {
       rc = ahci_reset_port(ai, p, 0);
-    } else {
-      ddprintf("ahci_scan_ports: (re)starting port %d\n", p);
+    }
+    else
+    {
+      DPRINTF(3,"ahci_scan_ports: (re)starting port %d\n", p);
       ahci_stop_port(ai, p);
       rc = ahci_start_port(ai, p, 0);
     }
-    if (rc) {
+    if (rc)
+    {
       /* no device attached to this port */
       ai->port_map &= ~(1UL << p);
…
     /* this port seems to have a device attached and ready for commands */
-    ddprintf("ahci_scan_ports: port %d seems to be attached to a device; probing...\n", p);
+    DPRINTF(1,"ahci_scan_ports: port %d seems to be attached to a device; probing...\n", p);
…
     is_ata = readl(port_base(ai, p) + PORT_SIG) == 0x00000101UL;
-    for (i = 0; i < 2; i++) {
+    for (i = 0; i < 2; i++)
+    {
       rc = ahci_exec_polled_cmd(ai, p, 0, 500,
                                 (is_ata) ? ATA_CMD_ID_ATA : ATA_CMD_ID_ATAPI,
-                                AP_VADDR, (void _far *) id_buf, 512,
+                                AP_VADDR, (void *) id_buf, ATA_ID_WORDS * sizeof(u16),
                                 AP_END);
-      if (rc == 0) {
-        break;
-      }
+      if (rc == 0) break;

       /* try again with ATA/ATAPI swapped */
…
-    if (rc == 0) {
+    if (rc == 0)
+    {
       /* we have a valid IDENTIFY or IDENTIFY_PACKET response */
-      ddphex(id_buf, 512, "ATA_IDENTIFY%s results:\n", (is_ata) ? "" : "_PACKET");
+      DHEXDUMP(2,id_buf, ATA_ID_WORDS * sizeof(u16), "ATA_IDENTIFY%s results:\n", (is_ata) ? "" : "_PACKET");
       ahci_setup_device(ai, p, 0, id_buf);
-    } else {
+    }
+    else
+    {
       /* no device attached to this port */
       ai->port_map &= ~(1UL << p);
…
 exit_port_scan:
-  if (!init_complete) {
+  if (!init_complete)
+  {
     ahci_restore_bios_config(ai);
   }
-  free(id_buf);
+  MemFree(id_buf);
   return(0);
 }
…
   int rc;
-  int p;
+  u32 p;
   int i;

-  dprintf("ahci_complete_init: completing initialization of adapter #%d\n", ad_no(ai));
+  DPRINTF(1,"ahci_complete_init: completing initialization of adapter #%d\n", ad_no(ai));

   /* register IRQ handlers; each IRQ level is registered only once */
-  for (i = 0; i < irq_map_cnt; i++) {
-    if (irq_map[i] == ai->irq) {
-      /* we already have this IRQ registered */
-      break;
-    }
-  }
-  if (i >= irq_map_cnt) {
-    dprintf("registering interrupt #%d\n", ai->irq);
-    if (DevHelp_SetIRQ(mk_NPFN(irq_handlers[irq_map_cnt]), ai->irq, 1) != 0) {
-      dprintf("failed to register shared interrupt\n");
-      if (DevHelp_SetIRQ(mk_NPFN(irq_handlers[irq_map_cnt]), ai->irq, 0) != 0) {
-        dprintf("failed to register exclusive interrupt\n");
+  for (i = 0; i < irq_map_cnt; i++)
+  {
+    if (irq_map[i] == ai->irq) break; /* we already have this IRQ registered */
+  }
+  if (i >= irq_map_cnt)
+  {
+    DPRINTF(2,"registering interrupt #%d\n", ai->irq);
+    if (Dev32Help_SetIRQ(irq_handlers[irq_map_cnt], ai->irq, 1) != 0)
+    {
+      DPRINTF(0,"failed to register shared interrupt\n");
+      if (Dev32Help_SetIRQ(irq_handlers[irq_map_cnt], ai->irq, 0) != 0)
+      {
+        DPRINTF(0,"failed to register exclusive interrupt\n");
         return(-1);
       }
…
   /* enable AHCI mode */
-  if ((rc = ahci_enable_ahci(ai)) != 0) {
-    return(rc);
-  }
+  if ((rc = ahci_enable_ahci(ai)) != 0) return(rc);
…
   * enough if a previously detected device has problems.
   */
-  for (p = 0; p < AHCI_MAX_PORTS; p++) {
-    if (ai->port_map & (1UL << p)) {
-      if (init_reset) {
-        ddprintf("ahci_complete_init: resetting port %d\n", p);
+  for (p = 0; p < AHCI_MAX_PORTS; p++)
+  {
+    if (ai->port_map & (1UL << p))
+    {
+      if (init_reset)
+      {
+        DPRINTF(3,"ahci_complete_init: resetting port %d\n", p);
         ahci_reset_port(ai, p, 1);
-      } else {
-        ddprintf("ahci_complete_init: restarting port #%d\n", p);
+      }
+      else
+      {
+        DPRINTF(3,"ahci_complete_init: restarting port #%d\n", p);
         ahci_stop_port(ai, p);
         ahci_start_port(ai, p, 1);
…
   /* pci_enable_int(ai->bus, ai->dev_func); */

+  DPRINTF(1,"ahci_complete_init: done\n");
   return(0);
 }
…
 int ahci_reset_port(AD_INFO *ai, int p, int ei)
 {
-  u8 _far *port_mmio = port_base(ai, p);
+  u8 *port_mmio = port_base(ai, p);
   u32 tmp;
   TIMER Timer;

-  dprintf("ahci_reset_port: resetting port %d.%d\n", ad_no(ai), p);
-  if (debug > 1) ahci_dump_port_regs(ai, p);
+  DPRINTF(2,"ahci_reset_port: resetting port %d.%d\n", ad_no(ai), p);
+  DUMP_PORT_REGS(2,ai,p);

   /* stop port engines (we don't care whether there is an error doing so) */
…
   /* set link speed and power management options */
-  ddprintf("ahci_reset_port: setting link speed and power management options\n");
+  DPRINTF(3,"ahci_reset_port: setting link speed and power management options\n");
   tmp = readl(port_mmio + PORT_SCR_CTL) & ~0x00000fffUL;
-  tmp |= ((u32)link_speed[ad_no(ai)][p] & 0x0f) << 4;
-  tmp |= ((u32)link_power[ad_no(ai)][p] & 0x0f) << 8;
+  tmp |= (link_speed[ad_no(ai)][p] & 0x0f) << 4;
+  tmp |= (link_power[ad_no(ai)][p] & 0x0f) << 8;
   writel(port_mmio + PORT_SCR_CTL, tmp);

   /* issue COMRESET on the port */
-  ddprintf("ahci_reset_port: issuing COMRESET on port %d\n", p);
+  DPRINTF(3,"ahci_reset_port: issuing COMRESET on port %d\n", p);
   writel(port_mmio + PORT_SCR_CTL, tmp | 1);
   readl(port_mmio + PORT_SCR_CTL); /* flush */
…
   /* wait for communication to be re-established after port reset */
-  dprintf("Wait for communication...\n");
-  timer_init(&Timer, 500);
-  while (((tmp = readl(port_mmio + PORT_SCR_STAT)) & 3) != 3) {
-    if (timer_check_and_block(&Timer)) {
-      dprintf("no device present after resetting port #%d (PORT_SCR_STAT = 0x%lx)\n", p, tmp);
+  DPRINTF(2,"Wait for communication...\n");
+  TimerInit(&Timer, 500);
+  while (((tmp = readl(port_mmio + PORT_SCR_STAT)) & 3) != 3)
+  {
+    if (TimerCheckAndBlock(&Timer))
+    {
+      DPRINTF(0,"no device present after resetting port #%d (PORT_SCR_STAT = 0x%x)\n", p, tmp);
       return(-1);
     }
…
   /* start port so we can receive the COMRESET FIS */
-  dprintf("ahci_reset_port: starting port %d again\n", p);
+  DPRINTF(2,"ahci_reset_port: starting port %d again\n", p);
   ahci_start_port(ai, p, ei);

   /* wait for device to be ready ((PxTFD & (BSY | DRQ | ERR)) == 0) */
-  timer_init(&Timer, 1000);
-  while (((tmp = readl(port_mmio + PORT_TFDATA)) & 0x89) != 0) {
-    if (timer_check_and_block(&Timer)) {
-      dprintf("device not ready on port #%d (PORT_TFDATA = 0x%lx)\n", p, tmp);
+  TimerInit(&Timer, 1000);
+  while (((tmp = readl(port_mmio + PORT_TFDATA)) & 0x89) != 0)
+  {
+    if (TimerCheckAndBlock(&Timer))
+    {
+      DPRINTF(0,"device not ready on port #%d (PORT_TFDATA = 0x%x)\n", p, tmp);
       ahci_stop_port(ai, p);
       return(-1);
     }
   }
-  ddprintf("ahci_reset_port: PORT_TFDATA = 0x%lx\n", readl(port_mmio + PORT_TFDATA));
+  DPRINTF(3,"ahci_reset_port: PORT_TFDATA = 0x%x\n", readl(port_mmio + PORT_TFDATA));

   return(0);
…
 int ahci_start_port(AD_INFO *ai, int p,
int ei) 827 854 { 828 u8 _far*port_mmio = port_base(ai, p);855 u8 *port_mmio = port_base(ai, p); 829 856 u32 status; 830 857 831 ddprintf("ahci_start_port %d.%d\n", ad_no(ai), p);858 DPRINTF(3,"ahci_start_port %d.%d\n", ad_no(ai), p); 832 859 /* check whether device presence is detected and link established */ 833 860 834 861 status = readl(port_mmio + PORT_SCR_STAT); 835 ddprintf("ahci_start_port: PORT_SCR_STAT = 0x%lx\n", status); 836 if ((status & 0xf) != 3) { 837 return(-1); 838 } 862 DPRINTF(3,"ahci_start_port: PORT_SCR_STAT = 0x%x\n", status); 863 if ((status & 0xf) != 3) return(-1); 839 864 840 865 /* clear SError, if any */ 841 866 status = readl(port_mmio + PORT_SCR_ERR); 842 ddprintf("ahci_start_port: PORT_SCR_ERR = 0x%lx\n", status);867 DPRINTF(3,"ahci_start_port: PORT_SCR_ERR = 0x%x\n", status); 843 868 writel(port_mmio + PORT_SCR_ERR, status); 844 869 … … 849 874 ahci_start_engine(ai, p); 850 875 851 if (ei) { 876 if (ei) 877 { 852 878 /* clear any pending interrupts on this port */ 853 if ((status = readl(port_mmio + PORT_IRQ_STAT)) != 0) { 879 if ((status = readl(port_mmio + PORT_IRQ_STAT)) != 0) 880 { 854 881 writel(port_mmio + PORT_IRQ_STAT, status); 855 882 } … … 867 894 PORT_IRQ_PIOS_FIS | 868 895 PORT_IRQ_D2H_REG_FIS); 869 } else { 896 } 897 else 898 { 870 899 writel(port_mmio + PORT_IRQ_MASK, 0); 871 900 } … … 881 910 void ahci_start_fis_rx(AD_INFO *ai, int p) 882 911 { 883 u8 _far*port_mmio = port_base(ai, p);912 u8 *port_mmio = port_base(ai, p); 884 913 u32 port_dma = port_dma_base_phys(ai, p); 885 914 u32 tmp; … … 905 934 void ahci_start_engine(AD_INFO *ai, int p) 906 935 { 907 u8 _far*port_mmio = port_base(ai, p);936 u8 *port_mmio = port_base(ai, p); 908 937 u32 tmp; 909 938 … … 920 949 int ahci_stop_port(AD_INFO *ai, int p) 921 950 { 922 u8 _far*port_mmio = port_base(ai, p);951 u8 *port_mmio = port_base(ai, p); 923 952 u32 tmp; 924 953 int rc; 925 954 926 ddprintf("ahci_stop_port %d.%d\n", ad_no(ai), p);955 DPRINTF(3,"ahci_stop_port 
%d.%d\n", ad_no(ai), p); 927 956 928 957 /* disable port interrupts */ … … 930 959 931 960 /* disable FIS reception */ 932 if ((rc = ahci_stop_fis_rx(ai, p)) != 0) { 933 dprintf("error: failed to stop FIS receive (%d)\n", rc); 961 if ((rc = ahci_stop_fis_rx(ai, p)) != 0) 962 { 963 DPRINTF(0,"error: failed to stop FIS receive (%d)\n", rc); 934 964 return(rc); 935 965 } 936 966 937 967 /* disable command engine */ 938 if ((rc = ahci_stop_engine(ai, p)) != 0) { 939 dprintf("error: failed to stop port HW engine (%d)\n", rc); 968 if ((rc = ahci_stop_engine(ai, p)) != 0) 969 { 970 DPRINTF(0,"error: failed to stop port HW engine (%d)\n", rc); 940 971 return(rc); 941 972 } … … 943 974 /* clear any pending port IRQs */ 944 975 tmp = readl(port_mmio + PORT_IRQ_STAT); 945 if (tmp) { 946 writel(port_mmio + PORT_IRQ_STAT, tmp); 947 } 976 if (tmp) writel(port_mmio + PORT_IRQ_STAT, tmp); 948 977 writel(ai->mmio + HOST_IRQ_STAT, 1UL << p); 949 978 … … 965 994 int ahci_stop_fis_rx(AD_INFO *ai, int p) 966 995 { 967 u8 _far*port_mmio = port_base(ai, p);996 u8 *port_mmio = port_base(ai, p); 968 997 TIMER Timer; 969 998 u32 tmp; … … 977 1006 /* wait for completion, spec says 500ms, give it 1000ms */ 978 1007 status = 0; 979 timer_init(&Timer, 1000); 980 while (readl(port_mmio + PORT_CMD) & PORT_CMD_FIS_ON) { 981 status = timer_check_and_block(&Timer); 1008 TimerInit(&Timer, 1000); 1009 while (readl(port_mmio + PORT_CMD) & PORT_CMD_FIS_ON) 1010 { 1011 status = TimerCheckAndBlock(&Timer); 982 1012 if (status) break; 983 1013 } … … 995 1025 int ahci_stop_engine(AD_INFO *ai, int p) 996 1026 { 997 u8 _far*port_mmio = port_base(ai, p);1027 u8 *port_mmio = port_base(ai, p); 998 1028 TIMER Timer; 999 1029 int status; … … 1003 1033 1004 1034 /* check if the port is already stopped */ 1005 if ((tmp & (PORT_CMD_START | PORT_CMD_LIST_ON)) == 0) { 1006 return 0; 1007 } 1035 if ((tmp & (PORT_CMD_START | PORT_CMD_LIST_ON)) == 0) return 0; 1008 1036 1009 1037 /* set port to idle */ … … 1013 1041 /* 
    wait for engine to stop. This could be as long as 500 msec */
   status = 0;
-  timer_init(&Timer, 500);
-  while (readl(port_mmio + PORT_CMD) & PORT_CMD_LIST_ON) {
-    status = timer_check_and_block(&Timer);
+  TimerInit(&Timer, 500);
+  while (readl(port_mmio + PORT_CMD) & PORT_CMD_LIST_ON)
+  {
+    status = TimerCheckAndBlock(&Timer);
     if (status) break;
   }
 …
 int ahci_port_busy(AD_INFO *ai, int p)
 {
-  u8 _far *port_mmio = port_base(ai, p);
-
-  return(readl(port_mmio + PORT_SCR_ACT) != 0 ||
-         readl(port_mmio + PORT_CMD_ISSUE) != 0);
+  u8 *port_mmio = port_base(ai, p);
+
+  return(readl(port_mmio + PORT_SCR_ACT) != 0 || readl(port_mmio + PORT_CMD_ISSUE) != 0);
 }
 …
  * involve delays), we're going with the spinlock for the time being.
  */
-void ahci_exec_iorb(IORBH _far *iorb, int ncq_capable,
-                    int (*func)(IORBH _far *, int))
+void ahci_exec_iorb(IORBH FAR16DATA *vIorb, IORBH *pIorb, int ncq_capable, int (*func)(IORBH FAR16DATA *, IORBH *pIorb, int))
 {
   volatile u32 *cmds;
-  ADD_WORKSPACE _far *aws = add_workspace(iorb);
-  AD_INFO *ai = ad_infos + iorb_unit_adapter(iorb);
-  P_INFO *port = ai->ports + iorb_unit_port(iorb);
+  ADD_WORKSPACE *aws = add_workspace(pIorb);
+  AD_INFO *ai = &ad_infos[iorb_unit_adapter(pIorb)];
+  P_INFO *port = &ai->ports[iorb_unit_port(pIorb)];
   ULONG timeout;
-  u8 _far *port_mmio = port_base(ai, iorb_unit_port(iorb));
+  u8 *port_mmio = port_base(ai, iorb_unit_port(pIorb));
   u16 cmd_max = ai->cmd_max;
   int i;
 
   /* determine timeout in milliseconds */
-  switch (iorb->Timeout) {
+  switch (pIorb->Timeout)
+  {
     case 0:
       timeout = DEFAULT_TIMEOUT;
 …
       break;
     default:
-      timeout = iorb->Timeout * 1000;
+      timeout = pIorb->Timeout * 1000;
       break;
   }
+
+
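The switch above normalizes the IORB timeout field to milliseconds: 0 selects a driver default, while any other value is taken as seconds and scaled by 1000 (one case is elided in this hunk). A minimal sketch of just that conversion; `DEFAULT_TIMEOUT_MS = 30000` is an illustrative value, not the driver's actual `DEFAULT_TIMEOUT`:

```c
/* Illustrative default; the real DEFAULT_TIMEOUT value is defined
 * elsewhere in the driver and is not shown in this changeset. */
#define DEFAULT_TIMEOUT_MS 30000UL

/* Convert an IORB Timeout (seconds, 0 = use default) to milliseconds,
 * following the switch in ahci_exec_iorb above. */
static unsigned long normalize_timeout_ms(unsigned short iorb_timeout_s)
{
  if (iorb_timeout_s == 0) return DEFAULT_TIMEOUT_MS;
  return (unsigned long)iorb_timeout_s * 1000UL;
}
```

The widening cast matters: a 16-bit seconds value times 1000 would overflow 16-bit arithmetic for timeouts above 65 seconds.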
DPRINTF(1,"---------- ahci_exec_iorb: iorb=%x\n", vIorb); 1069 1099 1070 1100 /* Enable AHCI mode; apparently, the AHCI mode may end up becoming … … 1078 1108 /* determine whether this will be an NCQ request */ 1079 1109 aws->is_ncq = 0; 1080 if (ncq_capable && port->devs[iorb_unit_device(iorb)].ncq_max > 1 && 1081 (ai->cap & HOST_CAP_NCQ) && !aws->no_ncq && init_complete) { 1110 if (ncq_capable && port->devs[iorb_unit_device(pIorb)].ncq_max > 1 && 1111 (ai->cap & HOST_CAP_NCQ) && !aws->no_ncq && init_complete) 1112 { 1082 1113 1083 1114 /* We can make this an NCQ request; limit command slots to the maximum … … 1087 1118 */ 1088 1119 aws->is_ncq = 1; 1089 if ((cmd_max = port->devs[iorb_unit_device(iorb)].ncq_max - 1) > ai->cmd_max) { 1120 if ((cmd_max = port->devs[iorb_unit_device(pIorb)].ncq_max - 1) > ai->cmd_max) 1121 { 1090 1122 cmd_max = ai->cmd_max; 1091 1123 } 1092 ddprintf("NCQ command; cmd_max = %d->%d\n", (u16)ai->cmd_max, cmd_max);1124 DPRINTF(3,"NCQ command; cmd_max = %d->%d\n", ai->cmd_max, cmd_max); 1093 1125 } 1094 1126 1095 1127 /* make sure adapter is available */ 1096 1128 spin_lock(drv_lock); 1097 if (!ai->busy) { 1098 1099 if (!init_complete) { 1129 if (!ai->busy) 1130 { 1131 1132 if (!init_complete) 1133 { 1100 1134 /* no IRQ handlers or context hooks availabe at this point */ 1101 1135 ai->busy = 1; 1102 1136 spin_unlock(drv_lock); 1103 ahci_exec_polled_iorb( iorb, func, timeout);1137 ahci_exec_polled_iorb(vIorb, pIorb, func, timeout); 1104 1138 ai->busy = 0; 1105 1139 return; … … 1107 1141 1108 1142 /* make sure we don't mix NCQ and regular commands */ 1109 if (aws->is_ncq && port->reg_cmds == 0 || !aws->is_ncq && port->ncq_cmds == 0) {1110 1143 if (aws->is_ncq && port->reg_cmds == 0 || !aws->is_ncq && port->ncq_cmds == 0) 1144 { 1111 1145 /* Find next available command slot. We use a simple round-robin 1112 1146 * algorithm for this to prevent commands with higher slot indexes … … 1114 1148 */ 1115 1149 cmds = (aws->is_ncq) ? 
&port->ncq_cmds : &port->reg_cmds; 1116 for (i = 0; i <= cmd_max; i++) { 1117 if (++(port->cmd_slot) > cmd_max) { 1118 port->cmd_slot = 0; 1119 } 1120 if ((*cmds & (1UL << port->cmd_slot)) == 0) { 1121 break; 1122 } 1150 for (i = 0; i <= cmd_max; i++) 1151 { 1152 if (++(port->cmd_slot) > cmd_max) port->cmd_slot = 0; 1153 if ((*cmds & (1UL << port->cmd_slot)) == 0) break; 1123 1154 } 1124 1155 1125 if ((*cmds & (1UL << port->cmd_slot)) == 0) { 1156 if ((*cmds & (1UL << port->cmd_slot)) == 0) 1157 { 1126 1158 /* found idle command slot; prepare command */ 1127 if (func(iorb, port->cmd_slot)) { 1159 if (func(vIorb, pIorb, port->cmd_slot)) 1160 { 1128 1161 /* Command preparation failed, or no HW command required; IORB 1129 1162 * will already have the error code if there was an error. 1130 1163 */ 1131 1164 spin_unlock(drv_lock); 1132 iorb_done( iorb);1165 iorb_done(vIorb, pIorb); 1133 1166 return; 1134 1167 } 1135 1168 1136 1169 /* start timer for this IORB */ 1137 ADD_StartTimerMS(&aws->timer, timeout, (PFN) timeout_callback, iorb, 0);1170 Timer_StartTimerMS(&aws->timer, timeout, timeout_callback, CastFar16ToULONG(vIorb)); 1138 1171 1139 1172 /* issue command to hardware */ … … 1142 1175 aws->cmd_slot = port->cmd_slot; 1143 1176 1144 ddprintf("issuing command on slot %d\n", port->cmd_slot); 1145 if (aws->is_ncq) { 1177 DPRINTF(1,"Issuing command Slot=%d cmds=%x\n", port->cmd_slot, *cmds); 1178 if (aws->is_ncq) 1179 { 1146 1180 writel(port_mmio + PORT_SCR_ACT, (1UL << port->cmd_slot)); 1147 1181 readl(port_mmio + PORT_SCR_ACT); /* flush */ … … 1178 1212 * without the driver-level spinlock held. 
1179 1213 */ 1180 void ahci_exec_polled_iorb(IORBH _far *iorb, int (*func)(IORBH _far *, int), 1181 ULONG timeout) 1214 void ahci_exec_polled_iorb(IORBH FAR16DATA *vIorb, IORBH *pIorb, int (*func)(IORBH FAR16DATA *, IORBH *pIorb, int), ULONG timeout) 1182 1215 { 1183 1216 AHCI_PORT_CFG *pc = NULL; 1184 AD_INFO *ai = ad_infos + iorb_unit_adapter( iorb);1185 int p = iorb_unit_port( iorb);1186 u8 _far*port_mmio = port_base(ai, p);1217 AD_INFO *ai = ad_infos + iorb_unit_adapter(vIorb); 1218 int p = iorb_unit_port(pIorb); 1219 u8 *port_mmio = port_base(ai, p); 1187 1220 TIMER Timer; 1188 1221 int rc; 1189 1222 1190 1223 /* enable AHCI mode */ 1191 if (ahci_enable_ahci(ai) != 0) { 1192 iorb_seterr(iorb, IOERR_ADAPTER_NONSPECIFIC); 1224 if (ahci_enable_ahci(ai) != 0) 1225 { 1226 iorb_seterr(pIorb, IOERR_ADAPTER_NONSPECIFIC); 1193 1227 goto restore_bios_config; 1194 1228 } 1195 1229 1196 1230 /* check whether command slot 0 is available */ 1197 if ((readl(port_mmio + PORT_CMD_ISSUE) & 1) != 0) { 1198 iorb_seterr(iorb, IOERR_DEVICE_BUSY); 1231 if ((readl(port_mmio + PORT_CMD_ISSUE) & 1) != 0) 1232 { 1233 iorb_seterr(pIorb, IOERR_DEVICE_BUSY); 1199 1234 goto restore_bios_config; 1200 1235 } 1201 1236 1202 1237 /* save port configuration */ 1203 if ((pc = ahci_save_port_config(ai, p)) == NULL) { 1204 iorb_seterr(iorb, IOERR_CMD_SW_RESOURCE); 1238 if ((pc = ahci_save_port_config(ai, p)) == NULL) 1239 { 1240 iorb_seterr(pIorb, IOERR_CMD_SW_RESOURCE); 1205 1241 goto restore_bios_config; 1206 1242 } 1207 1243 1208 1244 /* restart/reset port (includes the necessary port configuration) */ 1209 if (init_reset) { 1245 if (init_reset) 1246 { 1210 1247 /* As outlined in ahci_restore_bios_config(), switching back and 1211 1248 * forth between SATA and AHCI mode requires a COMRESET to force … … 1214 1251 * starting it. 
1215 1252 */ 1216 if (ahci_reset_port(ai, p, 0)) { 1217 iorb_seterr(iorb, IOERR_ADAPTER_NONSPECIFIC); 1253 if (ahci_reset_port(ai, p, 0)) 1254 { 1255 iorb_seterr(pIorb, IOERR_ADAPTER_NONSPECIFIC); 1218 1256 goto restore_bios_config; 1219 1257 } 1220 1258 1221 } else if (ahci_stop_port(ai, p) || ahci_start_port(ai, p, 0)) { 1222 iorb_seterr(iorb, IOERR_ADAPTER_NONSPECIFIC); 1259 } 1260 else if (ahci_stop_port(ai, p) || ahci_start_port(ai, p, 0)) 1261 { 1262 iorb_seterr(pIorb, IOERR_ADAPTER_NONSPECIFIC); 1223 1263 goto restore_bios_config; 1224 1264 } 1225 1265 1226 1266 /* prepare command */ 1227 if (func(iorb, 0) == 0) { 1267 if (func(vIorb, pIorb, 0) == 0) 1268 { 1228 1269 /* successfully prepared cmd; issue cmd and wait for completion */ 1229 ddprintf("executing polled cmd on slot 0...");1270 DPRINTF(3,"---------- executing polled cmd on slot 0..."); 1230 1271 writel(port_mmio + PORT_CMD_ISSUE, 1); 1231 timer_init(&Timer, timeout); 1232 while (readl(port_mmio + PORT_CMD_ISSUE) & 1) { 1233 rc = timer_check_and_block(&Timer); 1272 TimerInit(&Timer, timeout); 1273 while (readl(port_mmio + PORT_CMD_ISSUE) & 1) 1274 { 1275 rc = TimerCheckAndBlock(&Timer); 1234 1276 if (rc) break; 1235 1277 } 1236 1278 1237 1279 /* 0x89 = BSY(0x80) | DRQ(0x08) | ERR(0x01) */ 1238 if (rc) { 1239 dprintf(" timeout for IORB %Fp", iorb); 1240 iorb_seterr(iorb, IOERR_ADAPTER_TIMEOUT); 1241 } else if (readl(port_mmio + PORT_SCR_ERR) != 0 || readl(port_mmio + PORT_TFDATA) & 0x89) { 1242 dprintf(" polled cmd error for IORB %Fp", iorb); 1243 iorb_seterr(iorb, IOERR_DEVICE_NONSPECIFIC); 1244 ahci_reset_port(ai, iorb_unit_port(iorb), 0); 1245 } else { 1280 if (rc) 1281 { 1282 DPRINTF(3," timeout for IORB %x", vIorb); 1283 iorb_seterr(pIorb, IOERR_ADAPTER_TIMEOUT); 1284 } 1285 else if (readl(port_mmio + PORT_SCR_ERR) != 0 || readl(port_mmio + PORT_TFDATA) & 0x89) 1286 { 1287 DPRINTF(3," polled cmd error for IORB %x", vIorb); 1288 iorb_seterr(pIorb, IOERR_DEVICE_NONSPECIFIC); 1289 
+      ahci_reset_port(ai, iorb_unit_port(pIorb), 0);
+    }
+    else
+    {
       /* successfully executed command */
-      if (add_workspace(iorb)->ppfunc != NULL) {
-        add_workspace(iorb)->ppfunc(iorb);
-      } else {
-        add_workspace(iorb)->complete = 1;
+      if (add_workspace(pIorb)->ppfunc != NULL)
+      {
+        add_workspace(pIorb)->ppfunc(vIorb, pIorb);
       }
-    }
-    ddprintf("\n");
+      else
+      {
+        add_workspace(pIorb)->complete = 1;
+      }
+    }
+    DPRINTF(3,"\n");
   }
 
 restore_bios_config:
   /* restore BIOS configuration */
-  if (pc != NULL) {
+  if (pc != NULL)
+  {
     ahci_restore_port_config(ai, p, pc);
   }
   ahci_restore_bios_config(ai);
 
-  if (add_workspace(iorb)->complete | (iorb->Status | IORB_ERROR)) {
-    iorb_done(iorb);
+  if (add_workspace(pIorb)->complete | (pIorb->Status | IORB_ERROR))
+  {
+    iorb_done(vIorb, pIorb);
   }
   return;
 …
 {
   va_list va;
-  u8 _far *port_mmio = port_base(ai, p);
+  u8 *port_mmio = port_base(ai, p);
   u32 tmp;
   int rc;
 
   /* verify that command slot 0 is idle */
-  if (readl(port_mmio + PORT_CMD_ISSUE) & 1) {
-    ddprintf("port %d slot 0 is not idle; not executing polled cmd\n", p);
+  if (readl(port_mmio + PORT_CMD_ISSUE) & 1)
+  {
+    DPRINTF(3,"port %d slot 0 is not idle; not executing polled cmd\n", p);
     return(-1);
   }
 …
   /* fill in command slot 0 */
   va_start(va, cmd);
-  if ((rc = v_ata_cmd(ai, p, d, 0, cmd, va)) != 0) {
-    return(rc);
-  }
+  if ((rc = v_ata_cmd(ai, p, d, 0, cmd, va)) != 0) return(rc);
 
   /* start command execution for slot 0 */
-  ddprintf("executing polled cmd...");
+  DPRINTF(3,"---------- executing polled cmd...");
   writel(port_mmio + PORT_CMD_ISSUE, 1);
 
   /* wait until command has completed */
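The polled-command paths above repeatedly test the taskfile register against the mask 0x89, which the code's own comment expands as BSY(0x80) | DRQ(0x08) | ERR(0x01): a device is ready for a new command only when all three bits are clear. A small self-contained sketch of that check (the bit names follow the ATA status register; `taskfile_ready` is an illustrative helper, not a function in the driver):

```c
/* ATA status register bits tested via PxTFD in the code above:
 * 0x89 = BSY(0x80) | DRQ(0x08) | ERR(0x01). */
#define ATA_BUSY 0x80u
#define ATA_DRQ  0x08u
#define ATA_ERR  0x01u

/* Return nonzero when the low byte of PORT_TFDATA indicates the device
 * is idle: not busy, no data-transfer request pending, no error latched. */
static int taskfile_ready(unsigned int tfdata)
{
  return (tfdata & (ATA_BUSY | ATA_DRQ | ATA_ERR)) == 0;
}
```

This is why both ahci_reset_port and the polled executor spin until `(PxTFD & 0x89) == 0` before treating the device as usable, and why a set ERR bit after a polled command triggers a port reset.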
timer_init(&Timer, timeout);1354 TimerInit(&Timer, timeout); 1304 1355 rc = 0; 1305 while (readl(port_mmio + PORT_CMD_ISSUE) & 1) { 1306 rc = timer_check_and_block(&Timer); 1307 if (rc) { 1308 dprintf(" Timeout"); 1356 while (readl(port_mmio + PORT_CMD_ISSUE) & 1) 1357 { 1358 rc = TimerCheckAndBlock(&Timer); 1359 if (rc) 1360 { 1361 DPRINTF(2," Timeout"); 1309 1362 break; 1310 1363 } … … 1312 1365 1313 1366 tmp = readl(port_mmio + PORT_SCR_ERR); 1314 if (tmp & PORT_ERR_FAIL_BITS) { 1315 dprintf(" SERR = 0x%08lx", tmp); 1367 if (tmp & PORT_ERR_FAIL_BITS) 1368 { 1369 DPRINTF(2," SERR = 0x%08lx", tmp); 1316 1370 rc = 1; 1317 1371 } 1318 1372 /* 0x89 = BSY(0x80) | DRQ(0x08) | ERR(0x01) */ 1319 if (((tmp = readl(port_mmio + PORT_TFDATA)) & 0x89) != 0) { 1320 dprintf(" TFDATA = 0x%08lx", tmp); 1373 if (((tmp = readl(port_mmio + PORT_TFDATA)) & 0x89) != 0) 1374 { 1375 DPRINTF(2," TFDATA = 0x%08lx", tmp); 1321 1376 rc = 1; 1322 1377 } 1323 1378 1324 if (rc) { 1325 ddprintf("failed\n"); 1379 if (rc) 1380 { 1381 DPRINTF(3,"failed\n"); 1326 1382 ahci_reset_port(ai, p, 0); 1327 1383 return(-1); 1328 1384 } 1329 ddprintf("success\n");1385 DPRINTF(3,"success\n"); 1330 1386 return(0); 1331 1387 } … … 1342 1398 int ahci_flush_cache(AD_INFO *ai, int p, int d) 1343 1399 { 1344 if (!ai->ports[p].devs[d].atapi) { 1345 dprintf("flushing cache on %d.%d.%d\n", ad_no(ai), p, d); 1400 if (!ai->ports[p].devs[d].atapi) 1401 { 1402 DPRINTF(2,"flushing cache on %d.%d.%d\n", ad_no(ai), p, d); 1346 1403 return(ahci_exec_polled_cmd(ai, p, d, 30000, 1347 1404 ai->ports[p].devs[d].lba48 ? ATA_CMD_FLUSH_EXT : ATA_CMD_FLUSH, AP_END)); … … 1360 1417 int ahci_set_dev_idle(AD_INFO *ai, int p, int d, int idle) 1361 1418 { 1362 ddprintf("sending IDLE=%d command to port %d\n", idle, p); 1363 return ahci_exec_polled_cmd(ai, p, d, 500, ATA_CMD_IDLE, AP_COUNT, 1364 idle ? 
1 : 0, AP_END); 1419 DPRINTF(3,"sending IDLE=%d command to port %d\n", idle, p); 1420 return ahci_exec_polled_cmd(ai, p, d, 500, ATA_CMD_IDLE, AP_COUNT, idle ? 1 : 0, AP_END); 1365 1421 } 1366 1422 … … 1375 1431 * the driver-level spinlock when actually changing the driver state (IORB 1376 1432 * queues, ...) 1377 * 1378 * NOTE: OS/2 expects the carry flag set upon return from an interrupt 1379 * handler if the interrupt has not been handled. We do this by 1380 * shifting the return code from this function one bit to the right, 1381 * thus the return code must set bit 0 in this case. 1382 */ 1433 */ 1434 u32 DazCount = 0; 1435 1383 1436 int ahci_intr(u16 irq) 1384 1437 { … … 1388 1441 int p; 1389 1442 1443 DPRINTF(1,"AI=%x",DazCount++); 1444 1390 1445 /* find adapter(s) with pending interrupts */ 1391 for (a = 0; a < ad_info_cnt; a++) { 1446 for (a = 0; a < ad_info_cnt; a++) 1447 { 1392 1448 AD_INFO *ai = ad_infos + a; 1393 1449 1394 if (ai->irq == irq && (irq_stat = readl(ai->mmio + HOST_IRQ_STAT)) != 0) { 1450 if (ai->irq == irq && (irq_stat = readl(ai->mmio + HOST_IRQ_STAT)) != 0) 1451 { 1395 1452 /* this adapter has interrupts pending */ 1396 1453 u32 irq_masked = irq_stat & ai->port_map; 1397 1454 1398 for (p = 0; p <= ai->port_max; p++) { 1399 if (irq_masked & (1UL << p)) { 1455 for (p = 0; p <= ai->port_max; p++) 1456 { 1457 if (irq_masked & (1UL << p)) 1458 { 1400 1459 ahci_port_intr(ai, p); 1401 1460 } … … 1409 1468 } 1410 1469 1411 if (handled) { 1470 if (handled) 1471 { 1412 1472 /* Trigger state machine to process next IORBs, if any. Due to excessive 1413 1473 * IORB requeue operations (e.g. when processing large unaligned reads or 1414 1474 * writes), we may be stacking interrupts on top of each other. If we 1415 1475 * detect this, we'll pass this on to the engine context hook. 
1416 *1417 * Rousseau:1418 * The "Physycal Device Driver Reference" states that it's a good idea1419 * to disable interrupts before doing EOI so that it can proceed for this1420 * level without being interrupted, which could cause stacked interrupts,1421 * possibly exhausting the interrupt stack.1422 * (?:\IBMDDK\DOCS\PDDREF.INF->Device Helper (DevHlp) Services)->EOI)1423 *1424 * This is what seemed to happen when running in VirtualBox.1425 * Since in VBox the AHCI-controller is a software implementation, it is1426 * just not fast enough to handle a large bulk of requests, like when JFS1427 * flushes it's caches.1428 *1429 * Cross referencing with DANIS506 shows she does the same in the1430 * state-machine code in s506sm.c around line 244; disable interrupts1431 * before doing the EOI.1432 *1433 * Comments on the disable() function state that SMP systems should use1434 * a spinlock, but putting the EOI before spin_unlock() did not solve the1435 * VBox ussue. This is probably because spin_unlock() enables interrupts,1436 * which implies we need to return from this handler with interrupts1437 * disabled.1438 1476 */ 1439 if ((u16) (u32) (void _far *) &irq_stat < 0xf000) { 1440 ddprintf("IRQ stack running low; arming engine context hook\n"); 1477 #if 0 1478 if ((u32)&irq_stat < 0xf000) 1479 { 1480 DPRINTF(0,"IRQ stack running low; arming engine context hook\n"); 1441 1481 /* Rousseau: 1442 1482 * A context hook cannot be re-armed before it has completed. … … 1453 1493 * This needs some more investigation. 1454 1494 */ 1455 DevHelp_ArmCtxHook(0, engine_ctxhook_h); 1456 } else { 1495 KernArmHook(engine_ctxhook_h, 0, 0); 1496 } 1497 else 1498 #endif 1499 { 1457 1500 spin_lock(drv_lock); 1458 1501 trigger_engine(); 1459 1502 spin_unlock(drv_lock); 1460 1503 } 1461 /* disable interrupts to prevent stacking. 
(See comments above) */ 1462 disable(); 1463 /* complete the interrupt */ 1464 DevHelp_EOI(irq); 1465 return(0); 1466 } else { 1467 return(1); 1468 } 1504 DevCli(); 1505 Dev32Help_EOI(irq); 1506 return(1); /* handled */ 1507 } 1508 1509 return(0); /* not handled */ 1469 1510 } 1470 1511 … … 1477 1518 { 1478 1519 IORB_QUEUE done_queue; 1479 IORBH _far *iorb;1480 IORBH _far *next = NULL;1481 u8 _far*port_mmio = port_base(ai, p);1520 IORBH FAR16DATA *vIorb; 1521 IORBH FAR16DATA *vNext = NULL; 1522 u8 *port_mmio = port_base(ai, p); 1482 1523 u32 irq_stat; 1483 1524 u32 active_cmds; … … 1489 1530 readl(port_mmio + PORT_IRQ_STAT); /* flush */ 1490 1531 1491 ddprintf("port interrupt for adapter %d port %d stat %lx stack frame %Fp\n", 1492 ad_no(ai), p, irq_stat, (void _far *)&done_queue); 1532 DPRINTF(3,"port interrupt A=%d Port=%d stat=%x\n", ad_no(ai), p, irq_stat); 1493 1533 memset(&done_queue, 0x00, sizeof(done_queue)); 1494 1534 1495 if (irq_stat & PORT_IRQ_ERROR) { 1535 if (irq_stat & PORT_IRQ_ERROR) 1536 { 1496 1537 /* this is an error interrupt; 1497 1538 * disable port interrupts to avoid IRQ storm until error condition … … 1511 1552 * commands have completed, too. 
1512 1553 */ 1513 if (ai->ports[p].ncq_cmds != 0) { 1554 if (ai->ports[p].ncq_cmds != 0) 1555 { 1514 1556 active_cmds = readl(port_mmio + PORT_SCR_ACT); 1515 1557 done_mask = ai->ports[p].ncq_cmds ^ active_cmds; 1516 ddprintf("[ncq_cmds]: active_cmds = 0x%08lx, done_mask = 0x%08lx\n", 1517 active_cmds, done_mask); 1518 } else { 1558 DPRINTF(1,"[ncq_cmds]: active_cmds=0x%08x done_mask=0x%08x\n", active_cmds, done_mask); 1559 } 1560 else 1561 { 1519 1562 active_cmds = readl(port_mmio + PORT_CMD_ISSUE); 1520 1563 done_mask = ai->ports[p].reg_cmds ^ active_cmds; 1521 ddprintf("[reg_cmds]: active_cmds = 0x%08lx, done_mask = 0x%08lx\n", 1522 active_cmds, done_mask); 1564 DPRINTF(1,"[reg_cmds]: active_cmds=0x%08x done_mask=0x%08x\n", active_cmds, done_mask); 1523 1565 } 1524 1566 … … 1535 1577 * processed after releasing the spinlock. 1536 1578 */ 1537 for (iorb = ai->ports[p].iorb_queue.root; iorb != NULL; iorb = next) { 1538 ADD_WORKSPACE _far *aws = (ADD_WORKSPACE _far *) &iorb->ADDWorkSpace; 1539 next = iorb->pNxtIORB; 1540 if (aws->queued_hw && (done_mask & (1UL << aws->cmd_slot))) { 1579 for (vIorb = ai->ports[p].iorb_queue.vRoot; vIorb != NULL; vIorb = vNext) 1580 { 1581 IORBH *pIorb = Far16ToFlat(vIorb); 1582 ADD_WORKSPACE *aws = (ADD_WORKSPACE *) &pIorb->ADDWorkSpace; 1583 1584 vNext = pIorb->pNxtIORB; 1585 if (aws->queued_hw && (done_mask & (1UL << aws->cmd_slot))) 1586 { 1541 1587 /* this hardware command has completed */ 1542 1588 ai->ports[p].ncq_cmds &= ~(1UL << aws->cmd_slot); … … 1544 1590 1545 1591 /* call post-processing function, if any */ 1546 if (aws->ppfunc != NULL) { 1547 aws->ppfunc(iorb); 1548 } else { 1549 aws->complete = 1; 1592 if (aws->ppfunc != NULL) aws->ppfunc(vIorb, pIorb); 1593 else aws->complete = 1; 1594 1595 if (aws->complete) 1596 { 1597 /* this IORB is complete; move IORB to our temporary done queue */ 1598 iorb_queue_del(&ai->ports[p].iorb_queue, vIorb); 1599 iorb_queue_add(&done_queue, vIorb, pIorb); 1600 
aws_free(add_workspace(pIorb)); 1550 1601 } 1551 1552 if (aws->complete) {1553 /* this IORB is complete; move IORB to our temporary done queue */1554 iorb_queue_del(&ai->ports[p].iorb_queue, iorb);1555 iorb_queue_add(&done_queue, iorb);1556 aws_free(add_workspace(iorb));1557 }1558 1602 } 1559 1603 } … … 1562 1606 1563 1607 /* complete all IORBs in the done queue */ 1564 for (iorb = done_queue.root; iorb != NULL; iorb = next) { 1565 next = iorb->pNxtIORB; 1566 iorb_complete(iorb); 1608 for (vIorb = done_queue.vRoot; vIorb != NULL; vIorb = vNext) 1609 { 1610 IORBH *pIorb = Far16ToFlat(vIorb); 1611 1612 vNext = pIorb->pNxtIORB; 1613 iorb_complete(vIorb, pIorb); 1567 1614 } 1568 1615 } … … 1588 1635 * reset, or worse. 1589 1636 */ 1590 if (irq_stat & PORT_IRQ_UNK_FIS) { 1637 if (irq_stat & PORT_IRQ_UNK_FIS) 1638 { 1591 1639 #ifdef DEBUG 1592 u32 _far *unk = (u32 _far*) (port_dma_base(ai, p)->rx_fis + RX_FIS_UNK);1593 dprintf("warning: unknown FIS %08lx %08lx %08lx %08lx\n", unk[0], unk[1], unk[2], unk[3]);1640 u32 *unk = (u32 *) (port_dma_base(ai, p)->rx_fis + RX_FIS_UNK); 1641 DPRINTF(0,"warning: unknown FIS %08lx %08lx %08lx %08lx\n", unk[0], unk[1], unk[2], unk[3]); 1594 1642 #endif 1595 1643 reset_port = 1; 1596 1644 } 1597 if (irq_stat & (PORT_IRQ_HBUS_ERR | PORT_IRQ_HBUS_DATA_ERR)) { 1598 dprintf("warning: host bus [data] error for port #%d\n", p); 1645 if (irq_stat & (PORT_IRQ_HBUS_ERR | PORT_IRQ_HBUS_DATA_ERR)) 1646 { 1647 DPRINTF(0,"warning: host bus [data] error for port #%d\n", p); 1599 1648 reset_port = 1; 1600 1649 } 1601 if (irq_stat & PORT_IRQ_IF_ERR && !(ai->flags & AHCI_HFLAG_IGN_IRQ_IF_ERR)) { 1602 dprintf("warning: interface fatal error for port #%d\n", p); 1650 if (irq_stat & PORT_IRQ_IF_ERR && !(ai->flags & AHCI_HFLAG_IGN_IRQ_IF_ERR)) 1651 { 1652 DPRINTF(0,"warning: interface fatal error for port #%d\n", p); 1603 1653 reset_port = 1; 1604 1654 } 1605 if (reset_port) { 1655 if (reset_port) 1656 { 1606 1657 /* need to reset the port; leave this to 
the reset context hook */ 1607 1658 1608 1659 ports_to_reset[ad_no(ai)] |= 1UL << p; 1609 DevHelp_ArmCtxHook(0, reset_ctxhook_h);1660 KernArmHook(reset_ctxhook_h, 0, 0); 1610 1661 1611 1662 /* no point analyzing device errors after a reset... */ … … 1613 1664 } 1614 1665 1615 dprintf("port #%d interrupt error status: 0x%08lx; restarting port\n", 1616 p, irq_stat); 1666 DPRINTF(0,"port #%d interrupt error status: 0x%08lx; restarting port\n", p, irq_stat); 1617 1667 1618 1668 /* Handle device-specific errors. Those errors typically involve restarting … … 1621 1671 */ 1622 1672 ports_to_restart[ad_no(ai)] |= 1UL << p; 1623 DevHelp_ArmCtxHook(0, restart_ctxhook_h);1673 KernArmHook(restart_ctxhook_h, 0, 0); 1624 1674 } 1625 1675 … … 1628 1678 * the same for non-removable devices. 1629 1679 */ 1630 void ahci_get_geometry(IORBH _far *iorb) 1631 { 1632 dprintf("ahci_get_geometry(%d.%d.%d)\n", (int) iorb_unit_adapter(iorb), 1633 (int) iorb_unit_port(iorb), (int) iorb_unit_device(iorb)); 1634 1635 ahci_exec_iorb(iorb, 0, cmd_func(iorb, get_geometry)); 1680 void ahci_get_geometry(IORBH FAR16DATA *vIorb, IORBH *pIorb) 1681 { 1682 #ifdef DEBUG 1683 DPRINTF(2,"ahci_get_geometry(%d.%d.%d)\n", iorb_unit_adapter(pIorb), 1684 iorb_unit_port(pIorb), iorb_unit_device(pIorb)); 1685 #endif 1686 1687 ahci_exec_iorb(vIorb, pIorb, 0, cmd_func(pIorb, get_geometry)); 1636 1688 } 1637 1689 … … 1639 1691 * Test whether unit is ready. 
1640 1692 */ 1641 void ahci_unit_ready(IORBH _far *iorb) 1642 { 1643 dprintf("ahci_unit_ready(%d.%d.%d)\n", (int) iorb_unit_adapter(iorb), 1644 (int) iorb_unit_port(iorb), (int) iorb_unit_device(iorb)); 1645 1646 ahci_exec_iorb(iorb, 0, cmd_func(iorb, unit_ready)); 1693 void ahci_unit_ready(IORBH FAR16DATA *vIorb, IORBH *pIorb) 1694 { 1695 #ifdef DEBUG 1696 DPRINTF(2,"ahci_unit_ready(%d.%d.%d)\n", iorb_unit_adapter(pIorb), 1697 iorb_unit_port(pIorb), iorb_unit_device(pIorb)); 1698 #endif 1699 1700 ahci_exec_iorb(vIorb, pIorb, 0, cmd_func(pIorb, unit_ready)); 1647 1701 } 1648 1702 … … 1650 1704 * Read sectors from AHCI device. 1651 1705 */ 1652 void ahci_read(IORBH _far *iorb) 1653 { 1654 dprintf("ahci_read(%d.%d.%d, %ld, %ld)\n", (int) iorb_unit_adapter(iorb), 1655 (int) iorb_unit_port(iorb), (int) iorb_unit_device(iorb), 1656 (long) ((IORB_EXECUTEIO _far *) iorb)->RBA, 1657 (long) ((IORB_EXECUTEIO _far *) iorb)->BlockCount); 1658 1659 ahci_exec_iorb(iorb, 1, cmd_func(iorb, read)); 1706 void ahci_read(IORBH FAR16DATA *vIorb, IORBH *pIorb) 1707 { 1708 #ifdef DEBUG 1709 DPRINTF(2,"ahci_read(%d.%d.%d, %d, %d)\n", iorb_unit_adapter(vIorb), 1710 iorb_unit_port(pIorb), iorb_unit_device(pIorb), 1711 ((IORB_EXECUTEIO *) pIorb)->RBA, 1712 ((IORB_EXECUTEIO *) pIorb)->BlockCount); 1713 #endif 1714 1715 ahci_exec_iorb(vIorb, pIorb, 1, cmd_func(pIorb, read)); 1660 1716 } 1661 1717 … … 1663 1719 * Verify readability of sectors on AHCI device. 
1664 1720 */ 1665 void ahci_verify(IORBH _far *iorb) 1666 { 1667 dprintf("ahci_verify(%d.%d.%d, %ld, %ld)\n", (int) iorb_unit_adapter(iorb), 1668 (int) iorb_unit_port(iorb), (int) iorb_unit_device(iorb), 1669 (long) ((IORB_EXECUTEIO _far *) iorb)->RBA, 1670 (long) ((IORB_EXECUTEIO _far *) iorb)->BlockCount); 1671 1672 ahci_exec_iorb(iorb, 0, cmd_func(iorb, verify)); 1721 void ahci_verify(IORBH FAR16DATA *vIorb, IORBH *pIorb) 1722 { 1723 #ifdef DEBUG 1724 DPRINTF(2,"ahci_verify(%d.%d.%d, %d, %d)\n", iorb_unit_adapter(pIorb), 1725 iorb_unit_port(pIorb), iorb_unit_device(pIorb), 1726 ((IORB_EXECUTEIO *)pIorb)->RBA, 1727 ((IORB_EXECUTEIO *)pIorb)->BlockCount); 1728 #endif 1729 1730 ahci_exec_iorb(vIorb, pIorb, 0, cmd_func(pIorb, verify)); 1673 1731 } 1674 1732 … … 1676 1734 * Write sectors to AHCI device. 1677 1735 */ 1678 void ahci_write(IORBH _far *iorb) 1679 { 1680 dprintf("ahci_write(%d.%d.%d, %ld, %ld)\n", (int) iorb_unit_adapter(iorb), 1681 (int) iorb_unit_port(iorb), (int) iorb_unit_device(iorb), 1682 (long) ((IORB_EXECUTEIO _far *) iorb)->RBA, 1683 (long) ((IORB_EXECUTEIO _far *) iorb)->BlockCount); 1684 1685 ahci_exec_iorb(iorb, 1, cmd_func(iorb, write)); 1736 void ahci_write(IORBH FAR16DATA *vIorb, IORBH *pIorb) 1737 { 1738 #ifdef DEBUG 1739 DPRINTF(2,"ahci_write(%d.%d.%d, %d, %d)\n", iorb_unit_adapter(pIorb), 1740 iorb_unit_port(pIorb), iorb_unit_device(pIorb), 1741 ((IORB_EXECUTEIO *)pIorb)->RBA, 1742 ((IORB_EXECUTEIO *)pIorb)->BlockCount); 1743 #endif 1744 1745 ahci_exec_iorb(vIorb, pIorb, 1, cmd_func(pIorb, write)); 1686 1746 } 1687 1747 … … 1689 1749 * Execute SCSI (ATAPI) command. 
1690 1750 */ 1691 void ahci_execute_cdb(IORBH _far *iorb)1692 { 1693 int a = iorb_unit_adapter( iorb);1694 int p = iorb_unit_port( iorb);1695 int d = iorb_unit_device( iorb);1696 1697 dphex(((IORB_ADAPTER_PASSTHRU _far *) iorb)->pControllerCmd,1698 ((IORB_ADAPTER_PASSTHRU _far *) iorb)->ControllerCmdLen,1751 void ahci_execute_cdb(IORBH FAR16DATA *vIorb, IORBH *pIorb) 1752 { 1753 int a = iorb_unit_adapter(pIorb); 1754 int p = iorb_unit_port(pIorb); 1755 int d = iorb_unit_device(pIorb); 1756 1757 DHEXDUMP(0,Far16ToFlat(((IORB_ADAPTER_PASSTHRU *)pIorb)->pControllerCmd), 1758 ((IORB_ADAPTER_PASSTHRU *)pIorb)->ControllerCmdLen, 1699 1759 "ahci_execute_cdb(%d.%d.%d): ", a, p, d); 1700 1760 1701 if (ad_infos[a].ports[p].devs[d].atapi) { 1702 ahci_exec_iorb(iorb, 0, atapi_execute_cdb); 1703 } else { 1704 iorb_seterr(iorb, IOERR_CMD_NOT_SUPPORTED); 1705 iorb_done(iorb); 1761 if (ad_infos[a].ports[p].devs[d].atapi) 1762 { 1763 ahci_exec_iorb(vIorb, pIorb, 0, atapi_execute_cdb); 1764 } 1765 else 1766 { 1767 iorb_seterr(pIorb, IOERR_CMD_NOT_SUPPORTED); 1768 iorb_done(vIorb, pIorb); 1706 1769 } 1707 1770 } … … 1711 1774 * ATAPI devices because ATAPI devices will process some ATA commands as well. 
1712 1775 */ 1713 void ahci_execute_ata(IORBH _far *iorb)1776 void ahci_execute_ata(IORBH FAR16DATA *vIorb, IORBH *pIorb) 1714 1777 { 1715 1778 #ifdef DEBUG 1716 int a = iorb_unit_adapter(iorb); 1717 int p = iorb_unit_port(iorb); 1718 int d = iorb_unit_device(iorb); 1779 int a = iorb_unit_adapter(pIorb); 1780 int p = iorb_unit_port(pIorb); 1781 int d = iorb_unit_device(pIorb); 1782 1783 DHEXDUMP(0,Far16ToFlat(((IORB_ADAPTER_PASSTHRU *)pIorb)->pControllerCmd), 1784 ((IORB_ADAPTER_PASSTHRU *)pIorb)->ControllerCmdLen, 1785 "ahci_execute_ata(%d.%d.%d): ", a, p, d); 1719 1786 #endif 1720 1787 1721 dphex(((IORB_ADAPTER_PASSTHRU _far *) iorb)->pControllerCmd, 1722 ((IORB_ADAPTER_PASSTHRU _far *) iorb)->ControllerCmdLen, 1723 "ahci_execute_ata(%d.%d.%d): ", a, p, d); 1724 1725 ahci_exec_iorb(iorb, 0, ata_execute_ata); 1788 ahci_exec_iorb(vIorb, pIorb, 0, ata_execute_ata); 1726 1789 } 1727 1790 … … 1744 1807 if (d >= AHCI_MAX_DEVS) return; 1745 1808 1746 if (ai->port_max < p) { 1747 ai->port_max = p; 1748 } 1749 if (ai->ports[p].dev_max < d) { 1750 ai->ports[p].dev_max = d; 1751 } 1809 if (ai->port_max < p) ai->port_max = p; 1810 if (ai->ports[p].dev_max < d) ai->ports[p].dev_max = d; 1752 1811 memset(ai->ports[p].devs + d, 0x00, sizeof(*ai->ports[p].devs)); 1753 1812 … … 1757 1816 ai->ports[p].devs[d].dev_type = UIB_TYPE_DISK; 1758 1817 1759 if (id_buf[ATA_ID_CONFIG] & 0x8000U) { 1818 if (id_buf[ATA_ID_CONFIG] & 0x8000U) 1819 { 1760 1820 /* this is an ATAPI device; augment device information */ 1761 1821 ai->ports[p].devs[d].atapi = 1; … … 1764 1824 ai->ports[p].devs[d].ncq_max = 1; 1765 1825 1766 } else { 1826 } 1827 else 1828 { 1767 1829 /* complete ATA-specific device information */ 1768 if (enable_ncq[ad_no(ai)][p]) { 1830 if (enable_ncq[ad_no(ai)][p]) 1831 { 1769 1832 ai->ports[p].devs[d].ncq_max = id_buf[ATA_ID_QUEUE_DEPTH] & 0x001fU; 1770 1833 } 1771 if (ai->ports[p].devs[d].ncq_max < 1) { 1834 if (ai->ports[p].devs[d].ncq_max < 1) 1835 { 1772 1836 /* NCQ not 
enabled for this device, or device doesn't support NCQ */ 1773 1837 ai->ports[p].devs[d].ncq_max = 1; 1774 1838 } 1775 if (id_buf[ATA_ID_CFS_ENABLE_2] & 0x0400U) { 1839 if (id_buf[ATA_ID_CFS_ENABLE_2] & 0x0400U) 1840 { 1776 1841 ai->ports[p].devs[d].lba48 = 1; 1777 1842 } 1778 1843 } 1779 1844 1780 dprintf("found device %d.%d.%d: removable = %d, dev_type = %d, atapi = %d, "1845 DPRINTF(2,"found device %d.%d.%d: removable = %d, dev_type = %d, atapi = %d, " 1781 1846 "ncq_max = %d\n", ad_no(ai), p, d, 1782 1847 ai->ports[p].devs[d].removable, … … 1798 1863 * we distinguish only HDs and CD drives for now 1799 1864 */ 1800 if (ai->ports[p].devs[d].removable) { 1865 if (ai->ports[p].devs[d].removable) 1866 { 1801 1867 sprintf(dev_name, RM_CD_PREFIX "%s", p, d, ata_dev_name(id_buf)); 1802 } else { 1868 } 1869 else 1870 { 1803 1871 sprintf(dev_name, RM_HD_PREFIX "%s", p, d, ata_dev_name(id_buf)); 1804 1872 } … … 1816 1884 /* try to detect virtualbox environment to enable a hack for IRQ routing */ 1817 1885 if (ai == ad_infos && ai->pci_vendor == 0x8086 && ai->pci_device == 0x2829 && 1818 !memcmp(ata_dev_name(id_buf), "VBOX HARDDISK", 13)) { 1886 !memcmp(ata_dev_name(id_buf), "VBOX HARDDISK", 13)) 1887 { 1819 1888 /* running inside virtualbox */ 1820 1889 pci_hack_virtualbox(); -
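The sweeping change in this file is the move from 16:16 `_far` pointers (`IORBH _far *iorb`) to flat pointers obtained via `Far16ToFlat()`. On OS/2 the kernel tiles LDT selectors so that selector index N covers the 64K of flat address space starting at N << 16, which makes the conversion pure arithmetic. The sketch below illustrates that tiling idea only; the names `far16_to_flat`/`flat_to_sel` are illustrative, and the driver's real `Far16ToFlat()` may do more (validation, high-memory handling).

```c
#include <assert.h>

typedef unsigned long u32;
typedef unsigned short u16;

/* Sketch: OS/2 tiles LDT selectors so that selector index N maps to
 * the 64K region starting at flat address N << 16. A 16:16 pointer
 * sel:off therefore converts to a flat address arithmetically. */
u32 far16_to_flat(u16 sel, u16 off)
{
  /* drop the RPL/TI bits (low 3), scale the index to a 64K tile */
  return ((u32)(sel >> 3) << 16) | off;
}

/* Inverse direction: build the tiled LDT selector (TI=1, RPL=3,
 * i.e. low bits 111) covering a given flat address. */
u16 flat_to_sel(u32 flat)
{
  return (u16)(((flat >> 16) << 3) | 7);
}
```

Round-tripping a flat address through the selector and back returns the original address, which is the property the flat/far16 pairs in this changeset rely on.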
trunk/src/os2ahci/ahci.h
r174 r178 4 4 * Copyright (c) 2011 thi.guten Software Development 5 5 * Copyright (c) 2011 Mensys B.V. 6 * Copyright (c) 2013-2016 David Azarewicz 6 7 * 7 8 * Authors: Christian Mueller, Markus Thielen -
trunk/src/os2ahci/apm.c
r176 r178 4 4 * Copyright (c) 2011 thi.guten Software Development 5 5 * Copyright (c) 2011 Mensys B.V. 6 * Portions copyright (c) 2013-2015 David Azarewicz 6 * Portions copyright (c) 2013-2016 David Azarewicz 7 7 * 8 8 * Authors: Christian Mueller, Markus Thielen … … 34 34 35 35 #include <apmcalls.h> 36 USHORT _far _cdecl apm_event (APMEVENT _far*evt);36 USHORT _cdecl apm_event (APMEVENT *evt); 37 37 38 38 /****************************************************************************** … … 45 45 /* connect to APM driver */ 46 46 if ((rc = APMAttach()) != 0) { 47 dprintf("couldn't connect to APM driver (rc = %d)\n", rc);47 DPRINTF(2,"couldn't connect to APM driver (rc = %d)\n", rc); 48 48 return; 49 49 } … … 53 53 APM_NOTIFYNORMRESUME | 54 54 APM_NOTIFYCRITRESUME, 0)) != 0) { 55 dprintf("couldn't register for power event notifications (rc = %d)\n", rc);55 DPRINTF(2,"couldn't register for power event notifications (rc = %d)\n", rc); 56 56 return; 57 57 } … … 61 61 * APM event handler 62 62 */ 63 USHORT _far _cdecl apm_event(APMEVENT _far*evt)63 USHORT _cdecl apm_event(APMEVENT *evt) 64 64 { 65 65 USHORT msg = (USHORT) evt->ulParm1; 66 66 67 dprintf("received APM event: 0x%lx/0x%lx\n");67 DPRINTF(2,"received APM event: 0x%x/0x%x\n"); 68 68 69 69 switch (msg) { … … 83 83 84 84 default: 85 dprintf("unknown APM event; ignoring...\n");85 DPRINTF(2,"unknown APM event; ignoring...\n"); 86 86 break; 87 87 } … … 103 103 104 104 if (suspended) return; 105 dprintf("suspend()\n");105 DPRINTF(2,"suspend()\n"); 106 106 107 107 /* restart all ports with interrupts disabled */ … … 112 112 for (p = 0; p <= ai->port_max; p++) { 113 113 /* wait until all active commands have completed on this port */ 114 timer_init(&Timer, 250);114 TimerInit(&Timer, 250); 115 115 while (ahci_port_busy(ai, p)) { 116 if ( timer_check_and_block(&Timer)) break;116 if (TimerCheckAndBlock(&Timer)) break; 117 117 } 118 118 … … 142 142 143 143 suspended = 1; 144 dprintf("suspend() finished\n");144 
DPRINTF(2,"suspend() finished\n"); 145 145 } 146 146 … … 154 154 155 155 if (!suspended) return; 156 dprintf("resume()\n");156 DPRINTF(2,"resume()\n"); 157 157 158 158 for (a = 0; a < ad_info_cnt; a++) { … … 188 188 */ 189 189 resume_sleep_flag = 5000; 190 DevHelp_ArmCtxHook(0, engine_ctxhook_h);191 192 dprintf("resume() finished\n");190 KernArmHook(engine_ctxhook_h, 0, 0); 191 192 DPRINTF(2,"resume() finished\n"); 193 193 } 194 194 … … 207 207 //int d; 208 208 209 dprintf("shutdown_driver() enter\n"); 210 211 for (a = 0; a < ad_info_cnt; a++) { 209 DPRINTF(1,"shutdown_driver() enter\n"); 210 211 for (a = 0; a < ad_info_cnt; a++) 212 { 212 213 AD_INFO *ai = ad_infos + a; 213 214 … … 217 218 for (i=0; i<50000 && ai->busy; i++) udelay(1000); 218 219 219 for (p = 0; p <= ai->port_max; p++) { 220 u8 _far *port_mmio = port_base(ai, p); 220 for (p = 0; p <= ai->port_max; p++) 221 { 222 u8 *port_mmio = port_base(ai, p); 221 223 222 224 /* Wait up to 50ms for port to go not busy. Again stop it … … 246 248 247 249 /* flush cache on all attached devices */ 248 for (d = 0; d <= ai->ports[p].dev_max; d++) { 249 if (ai->ports[p].devs[d].present) { 250 for (d = 0; d <= ai->ports[p].dev_max; d++) 251 { 252 if (ai->ports[p].devs[d].present) 253 { 250 254 ahci_flush_cache(ai, p, d); 251 255 } … … 258 262 259 263 /* restore BIOS configuration for each adapter */ 260 for (a = 0; a < ad_info_cnt; a++) { 264 for (a = 0; a < ad_info_cnt; a++) 265 { 261 266 ahci_restore_bios_config(ad_infos + a); 262 267 } 263 268 264 dprintf("shutdown_driver() finished\n");265 } 266 269 DPRINTF(1,"shutdown_driver() finished\n"); 270 } 271 -
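Both `suspend()` and `shutdown_driver()` above poll hardware state with a bounded wait — `TimerInit(&Timer, 250)` followed by `TimerCheckAndBlock(&Timer)` inside the busy loop, or a counted `udelay()` loop. The sketch below shows the shape of that pattern with a simulated millisecond clock standing in for the driver's timer services; all names here are illustrative, and the real `TimerCheckAndBlock()` also blocks the thread between polls rather than spinning.

```c
#include <assert.h>

typedef unsigned long u32;

/* Hypothetical stand-in for the driver's timer object. */
typedef struct {
  u32 deadline;
} TIMER;

static u32 fake_ms;  /* simulated clock; advanced by each poll below */

u32 now_ms(void) { return fake_ms; }

void timer_init(TIMER *t, u32 timeout_ms)
{
  t->deadline = now_ms() + timeout_ms;
}

/* Returns nonzero once the timeout has expired. The simulated clock
 * advances 10ms per poll so the sketch terminates deterministically. */
int timer_expired(TIMER *t)
{
  fake_ms += 10;
  return now_ms() >= t->deadline;
}

/* Poll a busy flag until it clears or the timeout hits, mirroring
 * the per-port wait loop in suspend(). */
int wait_not_busy(volatile int *busy, u32 timeout_ms)
{
  TIMER t;
  timer_init(&t, timeout_ms);
  while (*busy) {
    if (timer_expired(&t)) return -1;  /* gave up, like the break */
  }
  return 0;
}
```

The key property, as in `suspend()`, is that a wedged port can delay the state change by at most the timeout, never hang it.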
trunk/src/os2ahci/ata.c
r176 r178 4 4 * Copyright (c) 2011 thi.guten Software Development 5 5 * Copyright (c) 2011 Mensys B.V. 6 * Portions copyright (c) 2013-2015David Azarewicz6 * Copyright (c) 2013-2016 David Azarewicz 7 7 * 8 8 * Authors: Christian Mueller, Markus Thielen … … 35 35 /* -------------------------- function prototypes -------------------------- */ 36 36 37 static int ata_cmd_read (IORBH _far *iorb, AD_INFO *ai, int p, int d, int slot,38 ULONG sector, ULONG count, SCATGATENTRY _far*sg_list,39 40 41 static int ata_cmd_write(IORBH _far *iorb, AD_INFO *ai, int p, int d, int slot,42 ULONG sector, ULONG count, SCATGATENTRY _far*sg_list,37 static int ata_cmd_read(IORBH *pIorb, AD_INFO *ai, int p, int d, int slot, 38 ULONG sector, ULONG count, SCATGATENTRY *sg_list, 39 ULONG sg_cnt); 40 41 static int ata_cmd_write(IORBH *pIorb, AD_INFO *ai, int p, int d, int slot, 42 ULONG sector, ULONG count, SCATGATENTRY *sg_list, 43 43 ULONG sg_cnt, int write_through); 44 44 … … 81 81 int v_ata_cmd(AD_INFO *ai, int p, int d, int slot, int cmd, va_list va) 82 82 { 83 AHCI_PORT_DMA _far*dma_base_virt;84 AHCI_CMD_HDR _far*cmd_hdr;85 AHCI_CMD_TBL _far*cmd_tbl;86 SCATGATENTRY _far*sg_list = NULL;83 AHCI_PORT_DMA *dma_base_virt; 84 AHCI_CMD_HDR *cmd_hdr; 85 AHCI_CMD_TBL *cmd_tbl; 86 SCATGATENTRY *sg_list = NULL; 87 87 SCATGATENTRY sg_single; 88 88 ATA_PARM ap; 89 89 ATA_CMD ata_cmd; 90 void _far*atapi_cmd = NULL;90 void *atapi_cmd = NULL; 91 91 u32 dma_base_phys; 92 u 16atapi_cmd_len = 0;93 u 16ahci_flags = 0;94 u 16sg_cnt = 0;95 inti;96 intn;92 u32 atapi_cmd_len = 0; 93 u32 ahci_flags = 0; 94 u32 sg_cnt = 0; 95 u32 i; 96 u32 n; 97 97 98 98 /* -------------------------------------------------------------------------- … … 102 102 */ 103 103 memset(&ata_cmd, 0x00, sizeof(ata_cmd)); 104 ata_cmd.cmd = (u8)cmd;104 ata_cmd.cmd = cmd; 105 105 106 106 /* parse variable arguments */ 107 do { 108 switch ((ap = va_arg(va, ATA_PARM))) { 107 do 108 { 109 switch ((ap = va_arg(va, ATA_PARM))) 110 { 109 111 110 112 
case AP_AHCI_FLAGS: 111 ahci_flags |= va_arg(va, u 16);113 ahci_flags |= va_arg(va, u32); 112 114 break; 113 115 114 116 case AP_WRITE: 115 if (va_arg(va, u16) != 0) { 117 if (va_arg(va, u32) != 0) 118 { 116 119 ahci_flags |= AHCI_CMD_WRITE; 117 120 } … … 120 123 case AP_FEATURES: 121 124 /* ATA features word */ 122 ata_cmd.features |= va_arg(va, u 16);125 ata_cmd.features |= va_arg(va, u32); 123 126 break; 124 127 125 128 case AP_COUNT: 126 129 /* transfer count */ 127 ata_cmd.count = va_arg(va, u 16);130 ata_cmd.count = va_arg(va, u32); 128 131 break; 129 132 … … 131 134 /* 28-bit sector address */ 132 135 ata_cmd.lba_l = va_arg(va, u32); 133 if (ata_cmd.lba_l & 0xf0000000UL) { 134 dprintf("error: LBA-28 address %ld has more than 28 bits\n", ata_cmd.lba_l); 136 if (ata_cmd.lba_l & 0xf0000000UL) 137 { 138 DPRINTF(0,"error: LBA-28 address %d has more than 28 bits\n", ata_cmd.lba_l); 135 139 return(ATA_CMD_INVALID_PARM); 136 140 } … … 144 148 /* 48-bit sector address */ 145 149 ata_cmd.lba_l = va_arg(va, u32); 146 ata_cmd.lba_h = va_arg(va, u 16);150 ata_cmd.lba_h = va_arg(va, u32); 147 151 break; 148 152 … … 150 154 /* ATA device byte; note that this byte contains the highest 151 155 * 4 bits of LBA-28 address; we have to leave them alone here. 
*/ 152 ata_cmd.device |= va_arg(va, u 16) & 0xf0U;156 ata_cmd.device |= va_arg(va, u32) & 0xf0; 153 157 break; 154 158 155 159 case AP_SGLIST: 156 160 /* scatter/gather list in SCATGATENTRY/count format */ 157 sg_list = va_arg(va, void _far*);158 sg_cnt = va_arg(va, u 16);161 sg_list = va_arg(va, void *); 162 sg_cnt = va_arg(va, u32); 159 163 break; 160 164 161 165 case AP_VADDR: 162 166 /* virtual buffer address in addr/len format (up to 4K) */ 163 sg_single.ppXferBuf = virt_to_phys(va_arg(va, void _far*));164 sg_single.XferBufLen = va_arg(va, u 16);167 sg_single.ppXferBuf = MemPhysAdr(va_arg(va, void *)); 168 sg_single.XferBufLen = va_arg(va, u32); 165 169 sg_list = &sg_single; 166 170 sg_cnt = 1; … … 169 173 case AP_ATAPI_CMD: 170 174 /* ATAPI command */ 171 atapi_cmd = va_arg(va, void _far*);172 atapi_cmd_len = va_arg(va, u 16);175 atapi_cmd = va_arg(va, void *); 176 atapi_cmd_len = va_arg(va, u32); 173 177 ahci_flags |= AHCI_CMD_ATAPI; 174 178 break; … … 176 180 case AP_ATA_CMD: 177 181 /* ATA command "pass-through" */ 178 memcpy(&ata_cmd, va_arg(va, void _far*), sizeof(ATA_CMD));182 memcpy(&ata_cmd, va_arg(va, void *), sizeof(ATA_CMD)); 179 183 break; 180 184 … … 183 187 184 188 default: 185 dprintf("error: v_ata_cmd() called with invalid parameter type (%d)\n", (int) ap);189 DPRINTF(0,"error: v_ata_cmd() called with invalid parameter type (%d)\n", (int) ap); 186 190 return(ATA_CMD_INVALID_PARM); 187 191 } … … 212 216 213 217 /* AHCI command header */ 214 cmd_hdr = dma_base_virt->cmd_hdr + slot;218 cmd_hdr = &dma_base_virt->cmd_hdr[slot]; 215 219 memset(cmd_hdr, 0x00, sizeof(*cmd_hdr)); 216 220 cmd_hdr->options = ((d & 0x0f) << 12); … … 218 222 cmd_hdr->options |= 5; /* length of command FIS in 32-bit words */ 219 223 cmd_hdr->tbl_addr = dma_base_phys + offsetof(AHCI_PORT_DMA, cmd_tbl[slot]); 224 /* DAZ can use MemPhysAdr(&dma_base_virt->cmd_tbl[slot]), but is probably slower. 
*/ 220 225 221 226 /* AHCI command table */ 222 cmd_tbl = dma_base_virt->cmd_tbl + slot;227 cmd_tbl = &dma_base_virt->cmd_tbl[slot]; 223 228 memset(cmd_tbl, 0x00, sizeof(*cmd_tbl)); 224 229 ata_cmd_to_fis(cmd_tbl->cmd_fis, &ata_cmd, d); 225 230 226 if (atapi_cmd != NULL) { 231 if (atapi_cmd != NULL) 232 { 227 233 /* copy ATAPI command */ 228 234 memcpy(cmd_tbl->atapi_cmd, atapi_cmd, atapi_cmd_len); … … 250 256 * successfully mapped. 251 257 */ 252 for (i = n = 0; i < sg_cnt; i++) { 258 for (i = n = 0; i < sg_cnt; i++) 259 { 253 260 u32 sg_addr = sg_list[i].ppXferBuf; 254 261 u32 sg_size = sg_list[i].XferBufLen; 255 262 256 do { 263 do 264 { 257 265 u32 chunk = (sg_size > AHCI_MAX_SG_ELEMENT_LEN) ? AHCI_MAX_SG_ELEMENT_LEN 258 266 : sg_size; 259 if (n >= AHCI_MAX_SG) { 267 if (n >= AHCI_MAX_SG) 268 { 260 269 /* couldn't store all S/G elements in our DMA buffer */ 261 ddprintf("ata_cmd(): too many S/G elements\n");270 DPRINTF(0,"ata_cmd(): too many S/G elements\n"); 262 271 return(i - 1); 263 272 } 264 if ((sg_addr & 1) || (chunk & 1)) { 265 ddprintf("error: ata_cmd() called with unaligned S/G element(s)\n"); 273 if ((sg_addr & 1) || (chunk & 1)) 274 { 275 DPRINTF(0,"error: ata_cmd() called with unaligned S/G element(s)\n"); 266 276 return(ATA_CMD_UNALIGNED_ADDR); 267 277 } … … 275 285 276 286 /* set final S/G count in AHCI command header */ 277 cmd_hdr->options |= (u32) n << 16; 278 279 if (debug >= 2) { 280 aprintf("ATA command for %d.%d.%d, slot %d:\n", ad_no(ai), p, d, slot); 281 phex(cmd_hdr, offsetof(AHCI_CMD_HDR, reserved), "cmd_hdr: "); 282 phex(&ata_cmd, sizeof(ata_cmd), "ata_cmd: "); 283 if (atapi_cmd != NULL) { 284 phex(atapi_cmd, atapi_cmd_len, "atapi_cmd: "); 285 } 286 if (n > 0) { 287 phex(cmd_tbl->sg_list, sizeof(*cmd_tbl->sg_list) * n, "sg_list: "); 287 cmd_hdr->options |= n << 16; 288 289 if (D32g_DbgLevel >= 2) 290 { 291 DPRINTF(2,"ATA command for %d.%d.%d, slot %d:\n", ad_no(ai), p, d, slot); 292 dHexDump(0,cmd_hdr, offsetof(AHCI_CMD_HDR, reserved), 
"cmd_hdr: "); 293 dHexDump(0,&ata_cmd, sizeof(ata_cmd), "ata_cmd: "); 294 if (atapi_cmd != NULL) 295 { 296 dHexDump(0,atapi_cmd, atapi_cmd_len, "atapi_cmd: "); 297 } 298 if (n > 0) 299 { 300 dHexDump(0,cmd_tbl->sg_list, sizeof(*cmd_tbl->sg_list) * n, "sg_list: "); 288 301 } 289 302 } … … 312 325 * +----------------+----------------+----------------+----------------+ 313 326 */ 314 void ata_cmd_to_fis(u8 _far *fis, ATA_CMD _far*ata_cmd, int d)327 void ata_cmd_to_fis(u8 *fis, ATA_CMD *ata_cmd, int d) 315 328 { 316 329 fis[0] = 0x27; /* register - host to device FIS */ … … 344 357 * lists. 345 358 */ 346 u16 ata_get_sg_indx(IORB_EXECUTEIO _far*io)359 u16 ata_get_sg_indx(IORB_EXECUTEIO *io) 347 360 { 348 361 ULONG offset = io->BlocksXferred * io->BlockSize; 362 SCATGATENTRY *pSGList = (SCATGATENTRY*)Far16ToFlat(io->pSGList); 349 363 USHORT i; 350 364 351 for (i = 0; i < io->cSGList && offset > 0; i++) { 352 offset -= io->pSGList[i].XferBufLen; 365 for (i = 0; i < io->cSGList && offset > 0; i++) 366 { 367 offset -= pSGList[i].XferBufLen; 353 368 } 354 369 … … 380 395 * boundaries which will still fit into our HW S/G list. 
381 396 */ 382 void ata_max_sg_cnt(IORB_EXECUTEIO _far*io, USHORT sg_indx, USHORT sg_max,383 USHORT _far *sg_cnt, USHORT _far*sector_cnt)397 void ata_max_sg_cnt(IORB_EXECUTEIO *io, USHORT sg_indx, USHORT sg_max, 398 USHORT *sg_cnt, USHORT *sector_cnt) 384 399 { 385 400 ULONG max_sector_cnt = 0; … … 387 402 ULONG offset = 0; 388 403 USHORT i; 389 390 for (i = sg_indx; i < io->cSGList; i++) { 391 if (i - sg_indx >= sg_max) { 404 SCATGATENTRY *pSGList = (SCATGATENTRY*)Far16ToFlat(io->pSGList); 405 406 for (i = sg_indx; i < io->cSGList; i++) 407 { 408 if (i - sg_indx >= sg_max) 409 { 392 410 /* we're beyond the number of S/G elements we can map */ 393 411 break; 394 412 } 395 413 396 offset += io->pSGList[i].XferBufLen; 397 if (offset % io->BlockSize == 0) { 414 offset += pSGList[i].XferBufLen; 415 if (offset % io->BlockSize == 0) 416 { 398 417 /* this S/G element ends on a sector boundary */ 399 418 max_sector_cnt = offset / io->BlockSize; … … 414 433 * and handled by atapi_get_geometry(). 
415 434 */ 416 int ata_get_geometry(IORBH _far *iorb, int slot)417 { 418 ADD_WORKSPACE _far *aws = add_workspace(iorb);435 int ata_get_geometry(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot) 436 { 437 ADD_WORKSPACE *aws = add_workspace(pIorb); 419 438 int rc; 420 439 421 440 /* allocate buffer for ATA identify information */ 422 if ((aws->buf = malloc(ATA_ID_WORDS * sizeof(u16))) == NULL) { 423 iorb_seterr(iorb, IOERR_CMD_SW_RESOURCE); 441 if ((aws->buf = MemAlloc(ATA_ID_WORDS * sizeof(u16))) == NULL) 442 { 443 iorb_seterr(pIorb, IOERR_CMD_SW_RESOURCE); 424 444 return(-1); 425 445 } … … 427 447 /* request ATA identify information */ 428 448 aws->ppfunc = ata_get_geometry_pp; 429 rc = ata_cmd(ad_infos + iorb_unit_adapter( iorb),430 iorb_unit_port( iorb),431 iorb_unit_device( iorb),449 rc = ata_cmd(ad_infos + iorb_unit_adapter(pIorb), 450 iorb_unit_port(pIorb), 451 iorb_unit_device(pIorb), 432 452 slot, 433 453 ATA_CMD_ID_ATA, 434 AP_VADDR, (void _far*) aws->buf, ATA_ID_WORDS * sizeof(u16),454 AP_VADDR, (void *) aws->buf, ATA_ID_WORDS * sizeof(u16), 435 455 AP_END); 436 456 437 if (rc != 0) { 438 iorb_seterr(iorb, IOERR_CMD_ADD_SOFTWARE_FAILURE); 457 if (rc != 0) 458 { 459 iorb_seterr(pIorb, IOERR_CMD_ADD_SOFTWARE_FAILURE); 439 460 } 440 461 … … 445 466 * geometry to the last full cylinder. 
446 467 */ 447 int adjust_cylinders(GEOMETRY _far *geometry, ULONG TotalSectors) { 468 int adjust_cylinders(GEOMETRY *geometry, ULONG TotalSectors) 469 { 448 470 USHORT SecPerCyl; 449 471 int rc = FALSE; … … 451 473 geometry->TotalSectors = TotalSectors; 452 474 SecPerCyl = geometry->SectorsPerTrack * geometry->NumHeads; 453 if (SecPerCyl > 0) { 475 if (SecPerCyl > 0) 476 { 454 477 ULONG TotalCylinders = TotalSectors / SecPerCyl; 455 478 456 479 geometry->TotalSectors = TotalCylinders * SecPerCyl; 457 480 geometry->TotalCylinders = TotalCylinders; 458 if (TotalCylinders >> 16) { 481 if (TotalCylinders >> 16) 482 { 459 483 geometry->TotalCylinders = 65535; 460 484 rc = TRUE; … … 470 494 #define BIOS_MAX_NUMHEADS 255 471 495 #define BIOS_MAX_SECTORSPERTRACK 63 472 void log_geom_calculate_LBA_assist(GEOMETRY _far*geometry, ULONG TotalSectors)496 void log_geom_calculate_LBA_assist(GEOMETRY *geometry, ULONG TotalSectors) 473 497 { 474 498 UCHAR numSpT = BIOS_MAX_SECTORSPERTRACK; … … 476 500 ULONG Cylinders; 477 501 478 if (TotalSectors <= (BIOS_MAX_CYLINDERS * 128 * BIOS_MAX_SECTORSPERTRACK)) { 502 if (TotalSectors <= (BIOS_MAX_CYLINDERS * 128 * BIOS_MAX_SECTORSPERTRACK)) 503 { 479 504 USHORT temp = (TotalSectors - 1) / (BIOS_MAX_CYLINDERS * BIOS_MAX_SECTORSPERTRACK); 480 505 … … 485 510 } 486 511 487 do { 512 do 513 { 488 514 Cylinders = TotalSectors / (USHORT)(numHeads * numSpT); 489 if (Cylinders >> 16) { 515 if (Cylinders >> 16) 516 { 490 517 if (numSpT < 128) 491 518 numSpT = (numSpT << 1) | 1; … … 500 527 } 501 528 502 int check_lvm(IORBH _far *iorb, ULONG sector)503 { 504 DLA_Table_Sector *pDLA = (DLA_Table_Sector*)add_workspace( iorb)->buf;505 AD_INFO *ai = ad_infos + iorb_unit_adapter( iorb);506 GEOMETRY _far *geometry = ((IORB_GEOMETRY _far *) iorb)->pGeometry;507 int p = iorb_unit_port( iorb);529 int check_lvm(IORBH *pIorb, ULONG sector) 530 { 531 DLA_Table_Sector *pDLA = (DLA_Table_Sector*)add_workspace(pIorb)->buf; 532 AD_INFO *ai = ad_infos + 
iorb_unit_adapter(pIorb); 533 GEOMETRY *geometry = ((IORB_GEOMETRY*)pIorb)->pGeometry; 534 int p = iorb_unit_port(pIorb); 508 535 int rc; 509 536 510 537 rc = ahci_exec_polled_cmd(ai, p, 0, 500, ATA_CMD_READ, 511 AP_SECTOR_28, (u32)sector-1,512 AP_COUNT, (u16)1,513 AP_VADDR, (void _far *)pDLA, 512,538 AP_SECTOR_28, sector-1, 539 AP_COUNT, 1, 540 AP_VADDR, (void *)pDLA, 512, 514 541 AP_DEVICE, 0x40, 515 542 AP_END); 516 543 if (rc) return 0; 517 544 518 ddphex(pDLA, sizeof(DLA_Table_Sector), "DLA sector %d:\n", sector-1);545 DHEXDUMP(3,pDLA, sizeof(DLA_Table_Sector), "DLA sector %d:\n", sector-1); 519 546 520 547 if ((pDLA->DLA_Signature1 == DLA_TABLE_SIGNATURE1) && (pDLA->DLA_Signature2 == DLA_TABLE_SIGNATURE2)) { 521 ddprintf("is_lvm_geometry found at sector %d\n", sector-1);548 DPRINTF(3,"is_lvm_geometry found at sector %d\n", sector-1); 522 549 geometry->TotalCylinders = pDLA->Cylinders; 523 550 geometry->NumHeads = pDLA->Heads_Per_Cylinder; … … 536 563 * return the saved values when ata_get_geometry() is called. 
537 564 */ 538 int is_lvm_geometry(IORBH _far *iorb)539 { 540 GEOMETRY _far *geometry = ((IORB_GEOMETRY _far *) iorb)->pGeometry;565 int is_lvm_geometry(IORBH *pIorb) 566 { 567 GEOMETRY *geometry = ((IORB_GEOMETRY*)pIorb)->pGeometry; 541 568 ULONG sector; 542 569 543 570 if (init_complete) return 0; /* We cannot use ahci_exec_polled_cmd() after init_complete */ 544 571 545 if (use_lvm_info) { 572 if (use_lvm_info) 573 { 546 574 #ifdef DEBUG 547 AD_INFO *ai = ad_infos + iorb_unit_adapter( iorb);548 int p = iorb_unit_port( iorb);549 int d = iorb_unit_device( iorb);550 ddprintf("is_lvm_geometry (%d.%d.%d)\n", ad_no(ai), p, d);575 AD_INFO *ai = ad_infos + iorb_unit_adapter(pIorb); 576 int p = iorb_unit_port(pIorb); 577 int d = iorb_unit_device(pIorb); 578 DPRINTF(3,"is_lvm_geometry (%d.%d.%d)\n", ad_no(ai), p, d); 551 579 #endif 552 580 553 581 /* First check the sector reported by the hardware */ 554 if (check_lvm(iorb, geometry->SectorsPerTrack)) return 1; 555 556 for (sector = 255; sector >= 63; sector >>= 1) { 582 if (check_lvm(pIorb, geometry->SectorsPerTrack)) return 1; 583 584 for (sector = 255; sector >= 63; sector >>= 1) 585 { 557 586 if (sector == geometry->SectorsPerTrack) continue; 558 if (check_lvm( iorb, sector)) return 1;587 if (check_lvm(pIorb, sector)) return 1; 559 588 } 560 589 } … … 567 596 * information to OS/2 IOCC_GEOMETRY information. 
568 597 */ 569 void ata_get_geometry_pp(IORBH _far *iorb)570 { 571 GEOMETRY _far *geometry = ((IORB_GEOMETRY _far *) iorb)->pGeometry;572 USHORT geometry_len = ((IORB_GEOMETRY _far *) iorb)->GeometryLen;573 u16 *id_buf = add_workspace( iorb)->buf;574 int a = iorb_unit_adapter( iorb);575 int p = iorb_unit_port( iorb);598 void ata_get_geometry_pp(IORBH FAR16DATA *vIorb, IORBH *pIorb) 599 { 600 GEOMETRY *geometry = ((IORB_GEOMETRY*)pIorb)->pGeometry; 601 USHORT geometry_len = ((IORB_GEOMETRY *)pIorb)->GeometryLen; 602 u16 *id_buf = add_workspace(pIorb)->buf; 603 int a = iorb_unit_adapter(pIorb); 604 int p = iorb_unit_port(pIorb); 576 605 char *Method; 577 606 … … 607 636 608 637 /* extract total number of sectors */ 609 if (id_buf[ATA_ID_CFS_ENABLE_2] & 0x400) { 638 if (id_buf[ATA_ID_CFS_ENABLE_2] & 0x400) 639 { 610 640 /* 48-bit LBA supported */ 611 if (ATA_CAPACITY48_H(id_buf) != 0) { 641 if (ATA_CAPACITY48_H(id_buf) != 0) 642 { 612 643 /* more than 32 bits for number of sectors */ 613 dprintf("warning: limiting disk %d.%d.%d to 2TB\n",614 iorb_unit_adapter( iorb), iorb_unit_port(iorb),615 iorb_unit_device( iorb));644 DPRINTF(0,"warning: limiting disk %d.%d.%d to 2TB\n", 645 iorb_unit_adapter(pIorb), iorb_unit_port(pIorb), 646 iorb_unit_device(pIorb)); 616 647 geometry->TotalSectors = 0xffffffffUL; 617 } else { 648 } 649 else 650 { 618 651 geometry->TotalSectors = ATA_CAPACITY48_L(id_buf); 619 652 } 620 } else { 653 } 654 else 655 { 621 656 /* 28-bit LBA */ 622 657 geometry->TotalSectors = ATA_CAPACITY(id_buf) & 0x0fffffffUL; … … 625 660 Method = "None"; 626 661 /* fabricate the remaining geometry fields */ 627 if (track_size[a][p] != 0) { 662 if (track_size[a][p] != 0) 663 { 628 664 /* A specific track size has been requested for this port; this is 629 665 * typically done for disks with 4K sectors to make sure partitions … … 634 670 geometry->TotalCylinders = geometry->TotalSectors / ((u32) geometry->NumHeads * (u32) geometry->SectorsPerTrack); 635 671 Method = 
"Custom"; 636 } else if (CUR_HEADS(id_buf) > 0 && CUR_CYLS(id_buf) > 0 && CUR_SECTORS(id_buf) > 0 && 637 CUR_CAPACITY(id_buf) == (u32) CUR_HEADS(id_buf) * (u32) CUR_CYLS(id_buf) * (u32) CUR_SECTORS(id_buf)) { 672 } 673 else if (CUR_HEADS(id_buf) > 0 && CUR_CYLS(id_buf) > 0 && CUR_SECTORS(id_buf) > 0 && 674 CUR_CAPACITY(id_buf) == (u32) CUR_HEADS(id_buf) * (u32) CUR_CYLS(id_buf) * (u32) CUR_SECTORS(id_buf)) 675 { 638 676 /* BIOS-supplied (aka "current") geometry values look valid */ 639 677 geometry->NumHeads = CUR_HEADS(id_buf); … … 641 679 geometry->TotalCylinders = CUR_CYLS(id_buf); 642 680 Method = "BIOS"; 643 } else if (ATA_HEADS(id_buf) > 0 && ATA_CYLS(id_buf) > 0 && ATA_SECTORS(id_buf) > 0) { 681 } 682 else if (ATA_HEADS(id_buf) > 0 && ATA_CYLS(id_buf) > 0 && ATA_SECTORS(id_buf) > 0) 683 { 644 684 /* ATA-supplied values for geometry look valid */ 645 685 geometry->NumHeads = ATA_HEADS(id_buf); … … 647 687 geometry->TotalCylinders = ATA_CYLS(id_buf); 648 688 Method = "ATA"; 649 } else { 689 } 690 else 691 { 650 692 /* use typical SCSI geometry */ 651 693 geometry->NumHeads = 255; … … 655 697 } 656 698 657 dprintf("Physical geometry: %ld cylinders, %d heads, %d sectors per track (%ldMB) (%s)\n",658 (u32) geometry->TotalCylinders, (u16) geometry->NumHeads, (u16)geometry->SectorsPerTrack,659 ( u32) (geometry->TotalSectors / 2048), Method);699 DPRINTF(2,"Physical geometry: %d cylinders, %d heads, %d sectors per track (%dMB) (%s)\n", 700 geometry->TotalCylinders, geometry->NumHeads, geometry->SectorsPerTrack, 701 (geometry->TotalSectors / 2048), Method); 660 702 661 703 /* Fixup the geometry in case the geometry reported by the BIOS is bad */ 662 if (adjust_cylinders(geometry, geometry->TotalSectors)) { // cylinder overflow 704 if (adjust_cylinders(geometry, geometry->TotalSectors)) 705 { // cylinder overflow 663 706 log_geom_calculate_LBA_assist(geometry, geometry->TotalSectors); 664 707 geometry->TotalSectors = (USHORT)(geometry->NumHeads * 
geometry->SectorsPerTrack) * (ULONG)geometry->TotalCylinders; … … 666 709 adjust_cylinders(geometry, geometry->TotalSectors); 667 710 668 dprintf("Logical geometry: %ld cylinders, %d heads, %d sectors per track (%ldMB) (%s)\n",669 (u32) geometry->TotalCylinders, (u16) geometry->NumHeads, (u16)geometry->SectorsPerTrack,670 ( u32) (geometry->TotalSectors / 2048), Method);671 672 if (is_lvm_geometry( iorb)) Method = "LVM";711 DPRINTF(2,"Logical geometry: %d cylinders, %d heads, %d sectors per track (%dMB) (%s)\n", 712 geometry->TotalCylinders, geometry->NumHeads, geometry->SectorsPerTrack, 713 (geometry->TotalSectors / 2048), Method); 714 715 if (is_lvm_geometry(pIorb)) Method = "LVM"; 673 716 ad_infos[a].ports[p].devs[0].dev_info.Cylinders = geometry->TotalCylinders; 674 717 ad_infos[a].ports[p].devs[0].dev_info.HeadsPerCylinder = geometry->NumHeads; … … 677 720 ad_infos[a].ports[p].devs[0].dev_info.Method = Method; 678 721 679 dprintf("Reported geometry: %ld cylinders, %d heads, %d sectors per track (%ldMB) (%s)\n",680 (u32) geometry->TotalCylinders, (u16) geometry->NumHeads, (u16)geometry->SectorsPerTrack,681 ( u32) (geometry->TotalSectors / 2048), Method);722 DPRINTF(2,"Reported geometry: %d cylinders, %d heads, %d sectors per track (%dMB) (%s)\n", 723 geometry->TotalCylinders, geometry->NumHeads, geometry->SectorsPerTrack, 724 (geometry->TotalSectors / 2048), Method); 682 725 683 726 /* tell interrupt handler that this IORB is complete */ 684 add_workspace( iorb)->complete = 1;727 add_workspace(pIorb)->complete = 1; 685 728 } 686 729 … … 688 731 * Test whether unit is ready. 689 732 */ 690 int ata_unit_ready(IORBH _far *iorb, int slot)733 int ata_unit_ready(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot) 691 734 { 692 735 /* This is a NOP for ATA devices (at least right now); returning an error … … 694 737 * HW command and the IORB will complete successfully. 
695 738 */ 696 ((IORB_UNIT_STATUS _far *) iorb)->UnitStatus = US_READY | US_POWER;739 ((IORB_UNIT_STATUS *)pIorb)->UnitStatus = US_READY | US_POWER; 697 740 return(-1); 698 741 } … … 701 744 * Read sectors from AHCI device. 702 745 */ 703 int ata_read(IORBH _far *iorb, int slot) 704 { 705 IORB_EXECUTEIO _far *io = (IORB_EXECUTEIO _far *) iorb; 706 AD_INFO *ai = ad_infos + iorb_unit_adapter(iorb); 746 int ata_read(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot) 747 { 748 IORB_EXECUTEIO *io = (IORB_EXECUTEIO *)pIorb; 749 SCATGATENTRY *pSGList = (SCATGATENTRY*)Far16ToFlat(io->pSGList); 750 AD_INFO *ai = ad_infos + iorb_unit_adapter(pIorb); 707 751 ULONG sector = io->RBA + io->BlocksXferred; 708 752 USHORT count = io->BlockCount - io->BlocksXferred; 709 753 USHORT sg_indx; 710 754 USHORT sg_cnt; 711 int p = iorb_unit_port( iorb);712 int d = iorb_unit_device( iorb);755 int p = iorb_unit_port(pIorb); 756 int d = iorb_unit_device(pIorb); 713 757 int rc; 714 758 715 if (io->BlockCount == 0) { 759 if (io->BlockCount == 0) 760 { 716 761 /* NOP; return -1 without error in IORB to indicate success */ 717 762 return(-1); 718 763 } 719 764 720 if (add_workspace(iorb)->unaligned) { 765 if (add_workspace(pIorb)->unaligned) 766 { 721 767 /* unaligned S/G addresses present; need to use double buffers */ 722 return(ata_read_unaligned( iorb, slot));768 return(ata_read_unaligned(pIorb, slot)); 723 769 } 724 770 … … 729 775 */ 730 776 if (io->BlocksXferred == 0 && io->cSGList == 1 && 731 io->pSGList[0].XferBufLen > (ULONG) io->BlockCount * io->BlockSize) { 732 io->pSGList[0].XferBufLen = (ULONG) io->BlockCount * io->BlockSize; 777 pSGList[0].XferBufLen > (ULONG) io->BlockCount * io->BlockSize) 778 { 779 pSGList[0].XferBufLen = (ULONG) io->BlockCount * io->BlockSize; 733 780 } 734 781 735 782 /* prepare read command while keeping an eye on S/G count limitations */ 736 do { 783 do 784 { 737 785 sg_indx = ata_get_sg_indx(io); 738 786 sg_cnt = io->cSGList - sg_indx; 739 if ((rc = 
ata_cmd_read(iorb, ai, p, d, slot, sector, count, 740 io->pSGList + sg_indx, sg_cnt)) > 0) { 787 if ((rc = ata_cmd_read(pIorb, ai, p, d, slot, sector, count, 788 pSGList + sg_indx, sg_cnt)) > 0) 789 { 741 790 /* couldn't map all S/G elements */ 742 ata_max_sg_cnt(io, sg_indx, (USHORT)rc, &sg_cnt, &count);791 ata_max_sg_cnt(io, sg_indx, rc, &sg_cnt, &count); 743 792 } 744 793 } while (rc > 0 && sg_cnt > 0); 745 794 746 if (rc == 0) { 747 add_workspace(iorb)->blocks = count; 748 add_workspace(iorb)->ppfunc = ata_read_pp; 749 750 } else if (rc > 0) { 751 iorb_seterr(iorb, IOERR_CMD_SGLIST_BAD); 752 753 } else if (rc == ATA_CMD_UNALIGNED_ADDR) { 795 if (rc == 0) 796 { 797 add_workspace(pIorb)->blocks = count; 798 add_workspace(pIorb)->ppfunc = ata_read_pp; 799 } 800 else if (rc > 0) 801 { 802 iorb_seterr(pIorb, IOERR_CMD_SGLIST_BAD); 803 } 804 else if (rc == ATA_CMD_UNALIGNED_ADDR) 805 { 754 806 /* unaligned S/G addresses detected; need to use double buffers */ 755 add_workspace(iorb)->unaligned = 1; 756 return(ata_read_unaligned(iorb, slot)); 757 758 } else { 759 iorb_seterr(iorb, IOERR_CMD_ADD_SOFTWARE_FAILURE); 807 add_workspace(pIorb)->unaligned = 1; 808 return(ata_read_unaligned(pIorb, slot)); 809 810 } 811 else 812 { 813 iorb_seterr(pIorb, IOERR_CMD_ADD_SOFTWARE_FAILURE); 760 814 } 761 815 … … 769 823 * use a transfer buffer and copy the data manually. 
770 824 */ 771 int ata_read_unaligned(IORBH _far *iorb, int slot)772 { 773 IORB_EXECUTEIO _far *io = (IORB_EXECUTEIO _far *) iorb;774 ADD_WORKSPACE _far *aws = add_workspace(iorb);775 AD_INFO *ai = ad_infos + iorb_unit_adapter( iorb);825 int ata_read_unaligned(IORBH *pIorb, int slot) 826 { 827 IORB_EXECUTEIO *io = (IORB_EXECUTEIO *)pIorb; 828 ADD_WORKSPACE *aws = add_workspace(pIorb); 829 AD_INFO *ai = ad_infos + iorb_unit_adapter(pIorb); 776 830 ULONG sector = io->RBA + io->BlocksXferred; 777 831 SCATGATENTRY sg_single; 778 int p = iorb_unit_port( iorb);779 int d = iorb_unit_device( iorb);832 int p = iorb_unit_port(pIorb); 833 int d = iorb_unit_device(pIorb); 780 834 int rc; 781 835 782 ddprintf("ata_read_unaligned(%d.%d.%d, %ld)\n", ad_no(ai), p, d, sector);836 DPRINTF(3,"ata_read_unaligned(%d.%d.%d, %d)\n", ad_no(ai), p, d, sector); 783 837 784 838 /* allocate transfer buffer */ 785 if ((aws->buf = malloc(io->BlockSize)) == NULL) { 786 iorb_seterr(iorb, IOERR_CMD_SW_RESOURCE); 839 if ((aws->buf = MemAlloc(io->BlockSize)) == NULL) 840 { 841 iorb_seterr(pIorb, IOERR_CMD_SW_RESOURCE); 787 842 return(-1); 788 843 } 789 844 790 845 /* prepare read command using transfer buffer */ 791 sg_single.ppXferBuf = virt_to_phys(aws->buf);846 sg_single.ppXferBuf = MemPhysAdr(aws->buf); 792 847 sg_single.XferBufLen = io->BlockSize; 793 rc = ata_cmd_read( iorb, ai, p, d, slot, sector, 1, &sg_single, 1);848 rc = ata_cmd_read(pIorb, ai, p, d, slot, sector, 1, &sg_single, 1); 794 849 795 850 if (rc == 0) { 796 add_workspace( iorb)->blocks = 1;797 add_workspace( iorb)->ppfunc = ata_read_pp;851 add_workspace(pIorb)->blocks = 1; 852 add_workspace(pIorb)->ppfunc = ata_read_pp; 798 853 799 854 } else if (rc > 0) { 800 iorb_seterr( iorb, IOERR_CMD_SGLIST_BAD);855 iorb_seterr(pIorb, IOERR_CMD_SGLIST_BAD); 801 856 802 857 } else { 803 iorb_seterr( iorb, IOERR_CMD_ADD_SOFTWARE_FAILURE);858 iorb_seterr(pIorb, IOERR_CMD_ADD_SOFTWARE_FAILURE); 804 859 } 805 860 … … 813 868 * takes care of 
copying data from the transfer buffer for unaligned reads. 814 869 */ 815 void ata_read_pp(IORBH _far *iorb) 816 { 817 IORB_EXECUTEIO _far *io = (IORB_EXECUTEIO _far *) iorb; 818 ADD_WORKSPACE _far *aws = add_workspace(iorb); 819 820 if (aws->unaligned) { 870 void ata_read_pp(IORBH FAR16DATA *vIorb, IORBH *pIorb) 871 { 872 IORB_EXECUTEIO *io = (IORB_EXECUTEIO *)pIorb; 873 SCATGATENTRY *pSGList = (SCATGATENTRY*)Far16ToFlat(io->pSGList); 874 ADD_WORKSPACE *aws = add_workspace(pIorb); 875 876 if (aws->unaligned) 877 { 821 878 /* copy transfer buffer to corresponding physical address in S/G list */ 822 sg_memcpy( io->pSGList, io->cSGList,879 sg_memcpy(pSGList, io->cSGList, 823 880 (ULONG) io->BlocksXferred * (ULONG) io->BlockSize, 824 881 aws->buf, io->BlockSize, BUF_TO_SG); 825 882 } 826 883 827 io->BlocksXferred += add_workspace(iorb)->blocks; 828 ddprintf("ata_read_pp(): blocks transferred = %d\n", (int) io->BlocksXferred); 829 830 if (io->BlocksXferred >= io->BlockCount) { 884 io->BlocksXferred += add_workspace(pIorb)->blocks; 885 DPRINTF(3,"ata_read_pp(): blocks transferred = %d\n", io->BlocksXferred); 886 887 if (io->BlocksXferred >= io->BlockCount) 888 { 831 889 /* we're done; tell IRQ handler the IORB is complete */ 832 add_workspace(iorb)->complete = 1; 833 } else { 890 add_workspace(pIorb)->complete = 1; 891 } 892 else 893 { 834 894 /* requeue this IORB for next iteration */ 835 iorb_requeue( iorb);895 iorb_requeue(pIorb); 836 896 } 837 897 } … … 840 900 * Verify readability of sectors on ATA device. 
841 901 */ 842 int ata_verify(IORBH _far *iorb, int slot)843 { 844 IORB_EXECUTEIO _far *io = (IORB_EXECUTEIO _far *) iorb;845 AD_INFO *ai = ad_infos + iorb_unit_adapter( iorb);846 int p = iorb_unit_port( iorb);847 int d = iorb_unit_device( iorb);902 int ata_verify(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot) 903 { 904 IORB_EXECUTEIO *io = (IORB_EXECUTEIO *)pIorb; 905 AD_INFO *ai = ad_infos + iorb_unit_adapter(pIorb); 906 int p = iorb_unit_port(pIorb); 907 int d = iorb_unit_device(pIorb); 848 908 int rc; 849 909 850 if (io->BlockCount == 0) { 910 if (io->BlockCount == 0) 911 { 851 912 /* NOP; return -1 without error in IORB to indicate success */ 852 913 return(-1); … … 854 915 855 916 /* prepare verify command */ 856 if (io->RBA >= (1UL << 28) || io->BlockCount > 256) { 917 if (io->RBA >= (1UL << 28) || io->BlockCount > 256) 918 { 857 919 /* need LBA48 for this command */ 858 920 if (!ai->ports[p].devs[d].lba48) { 859 iorb_seterr( iorb, IOERR_RBA_LIMIT);921 iorb_seterr(pIorb, IOERR_RBA_LIMIT); 860 922 return(-1); 861 923 } 862 924 rc = ata_cmd(ai, p, d, slot, ATA_CMD_VERIFY_EXT, 863 AP_SECTOR_48, (u32) io->RBA, (u16)0,864 AP_COUNT, (u16)io->BlockCount,925 AP_SECTOR_48, io->RBA, 0, 926 AP_COUNT, io->BlockCount, 865 927 AP_DEVICE, 0x40, 866 928 AP_END); 867 929 } else { 868 930 rc = ata_cmd(ai, p, d, slot, ATA_CMD_VERIFY, 869 AP_SECTOR_28, (u32)io->RBA,870 AP_COUNT, (u16)io->BlockCount & 0xffU,931 AP_SECTOR_28, io->RBA, 932 AP_COUNT, io->BlockCount & 0xffU, 871 933 AP_DEVICE, 0x40, 872 934 AP_END); … … 879 941 * Write sectors to AHCI device. 
880 942 */ 881 int ata_write(IORBH _far *iorb, int slot) 882 { 883 IORB_EXECUTEIO _far *io = (IORB_EXECUTEIO _far *) iorb; 884 AD_INFO *ai = ad_infos + iorb_unit_adapter(iorb); 943 int ata_write(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot) 944 { 945 IORB_EXECUTEIO *io = (IORB_EXECUTEIO *)pIorb; 946 SCATGATENTRY *pSGList = (SCATGATENTRY*)Far16ToFlat(io->pSGList); 947 AD_INFO *ai = ad_infos + iorb_unit_adapter(pIorb); 885 948 ULONG sector = io->RBA + io->BlocksXferred; 886 949 USHORT count = io->BlockCount - io->BlocksXferred; 887 950 USHORT sg_indx; 888 951 USHORT sg_cnt; 889 int p = iorb_unit_port( iorb);890 int d = iorb_unit_device( iorb);952 int p = iorb_unit_port(pIorb); 953 int d = iorb_unit_device(pIorb); 891 954 int rc; 892 955 893 if (io->BlockCount == 0) { 956 if (io->BlockCount == 0) 957 { 894 958 /* NOP; return -1 without error in IORB to indicate success */ 895 959 return(-1); 896 960 } 897 961 898 if (add_workspace(iorb)->unaligned) { 962 if (add_workspace(pIorb)->unaligned) 963 { 899 964 /* unaligned S/G addresses present; need to use double buffers */ 900 return(ata_write_unaligned( iorb, slot));965 return(ata_write_unaligned(pIorb, slot)); 901 966 } 902 967 … … 905 970 sg_indx = ata_get_sg_indx(io); 906 971 sg_cnt = io->cSGList - sg_indx; 907 if ((rc = ata_cmd_write(iorb, ai, p, d, slot, sector, count, 908 io->pSGList + sg_indx, sg_cnt, 909 io->Flags & XIO_DISABLE_HW_WRITE_CACHE)) > 0) { 972 if ((rc = ata_cmd_write(pIorb, ai, p, d, slot, sector, count, 973 pSGList + sg_indx, sg_cnt, 974 io->Flags & XIO_DISABLE_HW_WRITE_CACHE)) > 0) 975 { 910 976 /* couldn't map all S/G elements */ 911 977 ata_max_sg_cnt(io, sg_indx, (USHORT) rc, &sg_cnt, &count); … … 913 979 } while (rc > 0 && sg_cnt > 0); 914 980 915 if (rc == 0) { 916 add_workspace(iorb)->blocks = count; 917 add_workspace(iorb)->ppfunc = ata_write_pp; 918 919 } else if (rc > 0) { 920 iorb_seterr(iorb, IOERR_CMD_SGLIST_BAD); 921 922 } else if (rc == ATA_CMD_UNALIGNED_ADDR) { 981 if (rc == 0) 982 
{ 983 add_workspace(pIorb)->blocks = count; 984 add_workspace(pIorb)->ppfunc = ata_write_pp; 985 } 986 else if (rc > 0) 987 { 988 iorb_seterr(pIorb, IOERR_CMD_SGLIST_BAD); 989 } 990 else if (rc == ATA_CMD_UNALIGNED_ADDR) 991 { 923 992 /* unaligned S/G addresses detected; need to use double buffers */ 924 add_workspace(iorb)->unaligned = 1; 925 return(ata_write_unaligned(iorb, slot)); 926 927 } else { 928 iorb_seterr(iorb, IOERR_CMD_ADD_SOFTWARE_FAILURE); 993 add_workspace(pIorb)->unaligned = 1; 994 return(ata_write_unaligned(pIorb, slot)); 995 } 996 else 997 { 998 iorb_seterr(pIorb, IOERR_CMD_ADD_SOFTWARE_FAILURE); 929 999 } 930 1000 … … 938 1008 * use a transfer buffer and copy the data manually. 939 1009 */ 940 int ata_write_unaligned(IORBH _far *iorb, int slot) 941 { 942 IORB_EXECUTEIO _far *io = (IORB_EXECUTEIO _far *) iorb; 943 ADD_WORKSPACE _far *aws = add_workspace(iorb); 944 AD_INFO *ai = ad_infos + iorb_unit_adapter(iorb); 1010 int ata_write_unaligned(IORBH *pIorb, int slot) 1011 { 1012 IORB_EXECUTEIO *io = (IORB_EXECUTEIO *)pIorb; 1013 SCATGATENTRY *pSGList = (SCATGATENTRY*)Far16ToFlat(io->pSGList); 1014 ADD_WORKSPACE *aws = add_workspace(pIorb); 1015 AD_INFO *ai = ad_infos + iorb_unit_adapter(pIorb); 945 1016 ULONG sector = io->RBA + io->BlocksXferred; 946 1017 SCATGATENTRY sg_single; 947 int p = iorb_unit_port( iorb);948 int d = iorb_unit_device( iorb);1018 int p = iorb_unit_port(pIorb); 1019 int d = iorb_unit_device(pIorb); 949 1020 int rc; 950 1021 951 ddprintf("ata_write_unaligned(%d.%d.%d, %ld)\n", ad_no(ai), p, d, sector);1022 DPRINTF(3,"ata_write_unaligned(%d.%d.%d, %d)\n", ad_no(ai), p, d, sector); 952 1023 953 1024 /* allocate transfer buffer */ 954 if ((aws->buf = malloc(io->BlockSize)) == NULL) { 955 iorb_seterr(iorb, IOERR_CMD_SW_RESOURCE); 1025 if ((aws->buf = MemAlloc(io->BlockSize)) == NULL) 1026 { 1027 iorb_seterr(pIorb, IOERR_CMD_SW_RESOURCE); 956 1028 return(-1); 957 1029 } 958 1030 959 1031 /* copy next sector from S/G list to transfer 
buffer */ 960 sg_memcpy( io->pSGList, io->cSGList,1032 sg_memcpy(pSGList, io->cSGList, 961 1033 (ULONG) io->BlocksXferred * (ULONG) io->BlockSize, 962 1034 aws->buf, io->BlockSize, SG_TO_BUF); 963 1035 964 1036 /* prepare write command using transfer buffer */ 965 sg_single.ppXferBuf = virt_to_phys(aws->buf);1037 sg_single.ppXferBuf = MemPhysAdr(aws->buf); 966 1038 sg_single.XferBufLen = io->BlockSize; 967 rc = ata_cmd_write( iorb, ai, p, d, slot, sector, 1, &sg_single, 1,1039 rc = ata_cmd_write(pIorb, ai, p, d, slot, sector, 1, &sg_single, 1, 968 1040 io->Flags & XIO_DISABLE_HW_WRITE_CACHE); 969 1041 970 if (rc == 0) { 971 add_workspace(iorb)->blocks = 1; 972 add_workspace(iorb)->ppfunc = ata_write_pp; 973 974 } else if (rc > 0) { 975 iorb_seterr(iorb, IOERR_CMD_SGLIST_BAD); 976 977 } else { 978 iorb_seterr(iorb, IOERR_CMD_ADD_SOFTWARE_FAILURE); 1042 if (rc == 0) 1043 { 1044 add_workspace(pIorb)->blocks = 1; 1045 add_workspace(pIorb)->ppfunc = ata_write_pp; 1046 } 1047 else if (rc > 0) 1048 { 1049 iorb_seterr(pIorb, IOERR_CMD_SGLIST_BAD); 1050 } 1051 else 1052 { 1053 iorb_seterr(pIorb, IOERR_CMD_ADD_SOFTWARE_FAILURE); 979 1054 } 980 1055 … … 988 1063 * transferred, requeues the IORB to process the remaining sectors. 
989 1064 */ 990 void ata_write_pp(IORBH _far *iorb) 991 { 992 IORB_EXECUTEIO _far *io = (IORB_EXECUTEIO _far *) iorb; 993 994 io->BlocksXferred += add_workspace(iorb)->blocks; 995 ddprintf("ata_write_pp(): blocks transferred = %d\n", (int) io->BlocksXferred); 996 997 if (io->BlocksXferred >= io->BlockCount) { 1065 void ata_write_pp(IORBH FAR16DATA *vIorb, IORBH *pIorb) 1066 { 1067 IORB_EXECUTEIO *io = (IORB_EXECUTEIO *)pIorb; 1068 1069 io->BlocksXferred += add_workspace(pIorb)->blocks; 1070 DPRINTF(3,"ata_write_pp(): blocks transferred = %d\n", io->BlocksXferred); 1071 1072 if (io->BlocksXferred >= io->BlockCount) 1073 { 998 1074 /* we're done; tell IRQ handler the IORB is complete */ 999 add_workspace(iorb)->complete = 1; 1000 } else { 1075 add_workspace(pIorb)->complete = 1; 1076 } 1077 else 1078 { 1001 1079 /* requeue this IORB for next iteration */ 1002 iorb_requeue( iorb);1080 iorb_requeue(pIorb); 1003 1081 } 1004 1082 } … … 1007 1085 * Execute ATA command. 1008 1086 */ 1009 int ata_execute_ata(IORBH _far *iorb, int slot) 1010 { 1011 IORB_ADAPTER_PASSTHRU _far *apt = (IORB_ADAPTER_PASSTHRU _far *) iorb; 1012 AD_INFO *ai = ad_infos + iorb_unit_adapter(iorb); 1013 int p = iorb_unit_port(iorb); 1014 int d = iorb_unit_device(iorb); 1087 int ata_execute_ata(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot) 1088 { 1089 IORB_ADAPTER_PASSTHRU *apt = (IORB_ADAPTER_PASSTHRU *)pIorb; 1090 SCATGATENTRY *pSGList = (SCATGATENTRY*)Far16ToFlat(apt->pSGList); 1091 AD_INFO *ai = ad_infos + iorb_unit_adapter(pIorb); 1092 int p = iorb_unit_port(pIorb); 1093 int d = iorb_unit_device(pIorb); 1015 1094 int rc; 1016 1095 1017 if (apt->ControllerCmdLen != sizeof(ATA_CMD)) { 1018 iorb_seterr(iorb, IOERR_CMD_SYNTAX); 1096 if (apt->ControllerCmdLen != sizeof(ATA_CMD)) 1097 { 1098 iorb_seterr(pIorb, IOERR_CMD_SYNTAX); 1019 1099 return(-1); 1020 1100 } 1021 1101 1022 1102 rc = ata_cmd(ai, p, d, slot, 0, 1023 AP_SGLIST, apt->pSGList, apt->cSGList,1024 AP_ATA_CMD, apt->pControllerCmd,1103 
AP_SGLIST, pSGList, apt->cSGList, 1104 AP_ATA_CMD, Far16ToFlat(apt->pControllerCmd), 1025 1105 AP_WRITE, !(apt->Flags & PT_DIRECTION_IN), 1026 1106 AP_END); 1027 1107 1028 if (rc == 0) { 1029 add_workspace(iorb)->ppfunc = ata_execute_ata_pp; 1108 if (rc == 0) 1109 { 1110 add_workspace(pIorb)->ppfunc = ata_execute_ata_pp; 1030 1111 } 1031 1112 … … 1040 1121 * See ata_cmd_to_fis() for an explanation of the mapping. 1041 1122 */ 1042 void ata_execute_ata_pp(IORBH _far *iorb)1043 { 1044 AHCI_PORT_DMA _far*dma_base;1045 ATA_CMD _far*cmd;1123 void ata_execute_ata_pp(IORBH FAR16DATA *vIorb, IORBH *pIorb) 1124 { 1125 AHCI_PORT_DMA *dma_base; 1126 ATA_CMD *cmd; 1046 1127 AD_INFO *ai; 1047 u8 _far*fis;1128 u8 *fis; 1048 1129 int p; 1049 1130 1050 1131 /* get address of D2H FIS */ 1051 ai = ad_infos + iorb_unit_adapter( iorb);1052 p = iorb_unit_port( iorb);1132 ai = ad_infos + iorb_unit_adapter(pIorb); 1133 p = iorb_unit_port(pIorb); 1053 1134 dma_base = port_dma_base(ai, p); 1054 1135 fis = dma_base->rx_fis + 0x40; 1055 1136 1056 if (fis[0] != 0x34) { 1137 if (fis[0] != 0x34) 1138 { 1057 1139 /* this is not a D2H FIS - give up silently */ 1058 ddprintf("ata_execute_ata_pp(): D2H FIS type incorrect: %d\n", fis[0]);1059 add_workspace( iorb)->complete = 1;1140 DPRINTF(3,"ata_execute_ata_pp(): D2H FIS type incorrect: %d\n", fis[0]); 1141 add_workspace(pIorb)->complete = 1; 1060 1142 return; 1061 1143 } 1062 1144 1063 1145 /* map D2H FIS to the original ATA controller command structure */ 1064 cmd = (ATA_CMD _far *) ((IORB_ADAPTER_PASSTHRU _far *) iorb)->pControllerCmd;1146 cmd = (ATA_CMD *)Far16ToFlat(((IORB_ADAPTER_PASSTHRU*)pIorb)->pControllerCmd); 1065 1147 1066 1148 cmd->cmd = fis[2]; … … 1077 1159 | ((u16) fis[13] << 8); 1078 1160 1079 dphex(cmd, sizeof(*cmd), "ahci_execute_ata_pp(): cmd after completion:\n");1161 DHEXDUMP(0,cmd, sizeof(*cmd), "ahci_execute_ata_pp(): cmd after completion:\n"); 1080 1162 1081 1163 /* signal completion to interrupt handler */ 1082 
add_workspace( iorb)->complete = 1;1164 add_workspace(pIorb)->complete = 1; 1083 1165 } 1084 1166 … … 1099 1181 * else with a generic error code. 1100 1182 */ 1101 int ata_req_sense(IORBH _far *iorb, int slot)1102 { 1103 AD_INFO *ai = ad_infos + iorb_unit_adapter( iorb);1104 u8 _far *port_mmio = port_base(ai, iorb_unit_port(iorb));1183 int ata_req_sense(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot) 1184 { 1185 AD_INFO *ai = ad_infos + iorb_unit_adapter(pIorb); 1186 u8 *port_mmio = port_base(ai, iorb_unit_port(pIorb)); 1105 1187 u32 tf_data = readl(port_mmio + PORT_TFDATA); 1106 u8 err = (u8) (tf_data >> 8); 1107 u8 sts = (u8) (tf_data); 1108 1109 if (sts & ATA_ERR) { 1110 if (sts & ATA_DF) { 1188 u8 err = (tf_data >> 8); 1189 u8 sts = (tf_data); 1190 1191 if (sts & ATA_ERR) 1192 { 1193 if (sts & ATA_DF) 1194 { 1111 1195 /* there is a device-specific error condition */ 1112 if (err & ATA_ICRC) { 1113 iorb_seterr(iorb, IOERR_ADAPTER_DEVICEBUSCHECK); 1114 } else if (err & ATA_UNC) { 1115 iorb_seterr(iorb, IOERR_MEDIA); 1116 } else if (err & ATA_IDNF) { 1117 iorb_seterr(iorb, IOERR_RBA_ADDRESSING_ERROR); 1118 } else { 1119 iorb_seterr(iorb, IOERR_DEVICE_NONSPECIFIC); 1196 if (err & ATA_ICRC) 1197 { 1198 iorb_seterr(pIorb, IOERR_ADAPTER_DEVICEBUSCHECK); 1120 1199 } 1121 1122 } else { 1123 iorb_seterr(iorb, IOERR_DEVICE_NONSPECIFIC); 1124 } 1125 } else { 1200 else if (err & ATA_UNC) 1201 { 1202 iorb_seterr(pIorb, IOERR_MEDIA); 1203 } 1204 else if (err & ATA_IDNF) 1205 { 1206 iorb_seterr(pIorb, IOERR_RBA_ADDRESSING_ERROR); 1207 } 1208 else 1209 { 1210 iorb_seterr(pIorb, IOERR_DEVICE_NONSPECIFIC); 1211 } 1212 1213 } 1214 else 1215 { 1216 iorb_seterr(pIorb, IOERR_DEVICE_NONSPECIFIC); 1217 } 1218 } 1219 else 1220 { 1126 1221 /* this function only gets called when we received an error interrupt */ 1127 iorb_seterr( iorb, IOERR_DEVICE_NONSPECIFIC);1222 iorb_seterr(pIorb, IOERR_DEVICE_NONSPECIFIC); 1128 1223 } 1129 1224 … … 1162 1257 * device and the paramters set from above 
(NCQ, etc). 1163 1258 */ 1164 static int ata_cmd_read(IORBH _far *iorb, AD_INFO *ai, int p, int d, int slot,1165 ULONG sector, ULONG count, SCATGATENTRY _far*sg_list,1259 static int ata_cmd_read(IORBH *pIorb, AD_INFO *ai, int p, int d, int slot, 1260 ULONG sector, ULONG count, SCATGATENTRY *sg_list, 1166 1261 ULONG sg_cnt) 1167 1262 { 1168 1263 int rc; 1169 1264 1170 if (sector >= (1UL << 28) || count > 256 || add_workspace(iorb)->is_ncq) { 1265 if (sector >= (1UL << 28) || count > 256 || add_workspace(pIorb)->is_ncq) 1266 { 1171 1267 /* need LBA48 for this command */ 1172 if (!ai->ports[p].devs[d].lba48) { 1173 iorb_seterr(iorb, IOERR_RBA_LIMIT); 1268 if (!ai->ports[p].devs[d].lba48) 1269 { 1270 iorb_seterr(pIorb, IOERR_RBA_LIMIT); 1174 1271 return(-1); 1175 1272 } 1176 if (add_workspace(iorb)->is_ncq) { 1273 if (add_workspace(pIorb)->is_ncq) 1274 { 1177 1275 /* use NCQ read; count goes into feature register, tag into count! */ 1178 1276 rc = ata_cmd(ai, p, d, slot, ATA_CMD_FPDMA_READ, 1179 AP_SECTOR_48, (u32) sector, (u16)0,1180 AP_FEATURES, (u16)count,1181 AP_COUNT, ( u16) (slot << 3), /* tag == slot */1182 AP_SGLIST, sg_list, (u16)sg_cnt,1277 AP_SECTOR_48, sector, 0, 1278 AP_FEATURES, count, 1279 AP_COUNT, (slot << 3), /* tag == slot */ 1280 AP_SGLIST, sg_list, sg_cnt, 1183 1281 AP_DEVICE, 0x40, 1184 1282 AP_END); 1185 } else { 1283 } 1284 else 1285 { 1186 1286 rc = ata_cmd(ai, p, d, slot, ATA_CMD_READ_EXT, 1187 AP_SECTOR_48, (u32) sector, (u16)0,1188 AP_COUNT, (u16)count,1189 AP_SGLIST, sg_list, (u16)sg_cnt,1287 AP_SECTOR_48, sector, 0, 1288 AP_COUNT, count, 1289 AP_SGLIST, sg_list, sg_cnt, 1190 1290 AP_DEVICE, 0x40, 1191 1291 AP_END); 1192 1292 } 1193 1293 1194 } else { 1294 } 1295 else 1296 { 1195 1297 rc = ata_cmd(ai, p, d, slot, ATA_CMD_READ, 1196 AP_SECTOR_28, (u32)sector,1197 AP_COUNT, (u16)count & 0xffU,1198 AP_SGLIST, sg_list, (u16)sg_cnt,1298 AP_SECTOR_28, sector, 1299 AP_COUNT, count & 0xffU, 1300 AP_SGLIST, sg_list, sg_cnt, 1199 1301 AP_DEVICE, 
0x40, 1200 1302 AP_END); … … 1208 1310 * device and the paramters set from above (NCQ, etc) 1209 1311 */ 1210 static int ata_cmd_write(IORBH _far *iorb, AD_INFO *ai, int p, int d, int slot,1211 ULONG sector, ULONG count, SCATGATENTRY _far*sg_list,1312 static int ata_cmd_write(IORBH *pIorb, AD_INFO *ai, int p, int d, int slot, 1313 ULONG sector, ULONG count, SCATGATENTRY *sg_list, 1212 1314 ULONG sg_cnt, int write_through) 1213 1315 { 1214 1316 int rc; 1215 1317 1216 if (sector >= (1UL << 28) || count > 256 || add_workspace(iorb)->is_ncq) { 1318 if (sector >= (1UL << 28) || count > 256 || add_workspace(pIorb)->is_ncq) 1319 { 1217 1320 /* need LBA48 for this command */ 1218 if (!ai->ports[p].devs[d].lba48) { 1219 iorb_seterr(iorb, IOERR_RBA_LIMIT); 1321 if (!ai->ports[p].devs[d].lba48) 1322 { 1323 iorb_seterr(pIorb, IOERR_RBA_LIMIT); 1220 1324 return(-1); 1221 1325 } 1222 if (add_workspace(iorb)->is_ncq) { 1326 if (add_workspace(pIorb)->is_ncq) 1327 { 1223 1328 /* use NCQ write; count goes into feature register, tag into count! 
*/ 1224 1329 rc = ata_cmd(ai, p, d, slot, ATA_CMD_FPDMA_WRITE, 1225 AP_SECTOR_48, (u32) sector, (u16)0,1226 AP_FEATURES, (u16)count,1330 AP_SECTOR_48, sector, 0, 1331 AP_FEATURES, count, 1227 1332 /* tag = slot */ 1228 AP_COUNT, ( u16) (slot << 3),1229 AP_SGLIST, sg_list, (u16)sg_cnt,1333 AP_COUNT, (slot << 3), 1334 AP_SGLIST, sg_list, sg_cnt, 1230 1335 AP_DEVICE, 0x40, 1231 1336 /* force unit access */ … … 1233 1338 AP_WRITE, 1, 1234 1339 AP_END); 1235 } else { 1340 } 1341 else 1342 { 1236 1343 rc = ata_cmd(ai, p, d, slot, ATA_CMD_WRITE_EXT, 1237 AP_SECTOR_48, (u32) sector, (u16)0,1238 AP_COUNT, (u16)count,1239 AP_SGLIST, sg_list, (u16)sg_cnt,1344 AP_SECTOR_48, sector, 0, 1345 AP_COUNT, count, 1346 AP_SGLIST, sg_list, sg_cnt, 1240 1347 AP_DEVICE, 0x40, 1241 1348 AP_WRITE, 1, 1242 1349 AP_END); 1243 1350 } 1244 1245 } else { 1351 } 1352 else 1353 { 1246 1354 rc = ata_cmd(ai, p, d, slot, ATA_CMD_WRITE, 1247 AP_SECTOR_28, (u32)sector,1248 AP_COUNT, (u16)count & 0xffU,1249 AP_SGLIST, sg_list, (u16)sg_cnt,1355 AP_SECTOR_28, sector, 1356 AP_COUNT, count & 0xffU, 1357 AP_SGLIST, sg_list, sg_cnt, 1250 1358 AP_DEVICE, 0x40, 1251 1359 AP_WRITE, 1, … … 1255 1363 return(rc); 1256 1364 } 1365 1366 /****************************************************************************** 1367 * Copy block from S/G list to virtual address or vice versa. 
 */
void sg_memcpy(SCATGATENTRY *sg_list, USHORT sg_cnt, ULONG sg_off,
               void *buf, USHORT len, SG_MEMCPY_DIRECTION dir)
{
  USHORT i;
  USHORT l;
  ULONG phys_addr;
  ULONG pos = 0;
  char *p;

  /* walk through S/G list to find the elements involved in the operation */
  for (i = 0; i < sg_cnt && len > 0; i++)
  {
    if (pos <= sg_off && pos + sg_list[i].XferBufLen > sg_off)
    {
      /* this S/G element intersects with the block to be copied */
      phys_addr = sg_list[i].ppXferBuf + (sg_off - pos);
      if ((l = sg_list[i].XferBufLen - (sg_off - pos)) > len)
      {
        l = len;
      }

      if (Dev32Help_PhysToLin(phys_addr, l, (PVOID) &p))
      {
        panic("sg_memcpy(): DevHelp_PhysToLin() failed");
      }
      if (dir == SG_TO_BUF)
      {
        memcpy(buf, p, l);
      }
      else
      {
        memcpy(p, buf, l);
      }
      sg_off += l;
      buf = (char *) buf + l;
      len -= l;
    }

    pos += sg_list[i].XferBufLen;
  }
}

/******************************************************************************
 * Halt processing by submitting an internal error. This is a last resort and
 * should only be called when the system state is corrupt.
 */
void panic(char *msg)
{
  Dev32Help_InternalError(msg, strlen(msg));
}
-
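The sg_memcpy() routine added in this changeset walks the scatter/gather list, skips elements that lie before the requested logical offset, and clips each copy to the current element's bounds. The same walk can be exercised in user space; in this sketch plain mapped pointers stand in for the physical addresses and the Dev32Help_PhysToLin() mapping step, so the type and function names here are illustrative only, not the driver's real SCATGATENTRY:

```c
#include <string.h>

/* User-space stand-ins for the driver types (illustrative names only) */
typedef struct {
    char *buf;              /* stands in for the mapped ppXferBuf */
    unsigned long len;      /* XferBufLen */
} SgEntry;

typedef enum { SG_TO_BUF, BUF_TO_SG } SgDir;

/* Copy `len` bytes at logical offset `sg_off` within the S/G list to or
 * from `buf`, mirroring the loop structure of sg_memcpy(): skip elements
 * before the offset, then clip each copy to the element boundary. */
void sg_copy(SgEntry *sg, int sg_cnt, unsigned long sg_off,
             void *buf, unsigned int len, SgDir dir)
{
    unsigned long pos = 0;
    int i;

    for (i = 0; i < sg_cnt && len > 0; i++) {
        if (pos <= sg_off && pos + sg[i].len > sg_off) {
            /* this element intersects the block being copied */
            char *p = sg[i].buf + (sg_off - pos);
            unsigned int l = (unsigned int)(sg[i].len - (sg_off - pos));

            if (l > len)
                l = len;
            if (dir == SG_TO_BUF)
                memcpy(buf, p, l);
            else
                memcpy(p, buf, l);
            sg_off += l;
            buf = (char *)buf + l;
            len -= l;
        }
        pos += sg[i].len;
    }
}
```

A copy that starts inside the first element and ends inside the second exercises the clipping on both sides of the element boundary.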
trunk/src/os2ahci/ata.h
r125 r178 4 4 * Copyright (c) 2011 thi.guten Software Development 5 5 * Copyright (c) 2011 Mensys B.V. 6 * Copyright (c) 2013-2016 David Azarewicz 6 7 * 7 8 * Authors: Christian Mueller, Markus Thielen … … 431 432 AP_SECTOR_48, /* [u32, u16] 48-bit sector address */ 432 433 AP_DEVICE, /* [u16] ATA cmd "device" field */ 433 AP_SGLIST, /* [void _far*, u16] buffer S/G (SCATGATENTRY/count) */434 AP_VADDR, /* [void _far*, u16] buffer virtual address (buf/len) */434 AP_SGLIST, /* [void *, u16] buffer S/G (SCATGATENTRY/count) */ 435 AP_VADDR, /* [void *, u16] buffer virtual address (buf/len) */ 435 436 AP_WRITE, /* [u16] if != 0, data is written to device */ 436 437 AP_AHCI_FLAGS, /* [u16] AHCI command header flags */ 437 AP_ATAPI_CMD, /* [void _far*, u16] ATAPI command (CDB) and length */438 AP_ATA_CMD, /* [void _far*] ATA command (fixed len) */438 AP_ATAPI_CMD, /* [void *, u16] ATAPI command (CDB) and length */ 439 AP_ATA_CMD, /* [void *] ATA command (fixed len) */ 439 440 AP_END /* [] end of variable argument list */ 440 441 } ATA_PARM; … … 494 495 extern int v_ata_cmd (AD_INFO *ai, int port, int device, 495 496 int slot, int cmd, va_list va); 496 extern void ata_cmd_to_fis (u8 _far *fis, ATA_CMD _far*cmd,497 extern void ata_cmd_to_fis (u8 *fis, ATA_CMD *cmd, 497 498 int device); 498 extern USHORT ata_get_sg_indx (IORB_EXECUTEIO _far*io);499 extern void ata_max_sg_cnt (IORB_EXECUTEIO _far*io,499 extern USHORT ata_get_sg_indx (IORB_EXECUTEIO *io); 500 extern void ata_max_sg_cnt (IORB_EXECUTEIO *io, 500 501 USHORT sg_indx, USHORT sg_max, 501 USHORT _far*sg_cnt,502 USHORT _far*sector_cnt);503 504 extern int ata_get_geometry (IORBH _far *iorb, int slot);505 extern void ata_get_geometry_pp (IORBH _far *iorb);506 extern int ata_unit_ready (IORBH _far *iorb, int slot);507 extern int ata_read (IORBH _far *iorb, int slot);508 extern int ata_read_unaligned (IORBH _far *iorb, int slot);509 extern void ata_read_pp (IORBH _far *iorb);510 extern int ata_verify (IORBH _far *iorb, int 
slot);511 extern int ata_write (IORBH _far *iorb, int slot);512 extern int ata_write_unaligned (IORBH _far *iorb, int slot);513 extern void ata_write_pp (IORBH _far *iorb);514 extern int ata_execute_ata (IORBH _far *iorb, int slot);515 extern void ata_execute_ata_pp (IORBH _far *iorb);516 extern int ata_req_sense (IORBH _far *iorb, int slot);502 USHORT *sg_cnt, 503 USHORT *sector_cnt); 504 505 extern int ata_get_geometry(IORBH FAR16DATA *iorb, IORBH *pIorb, int slot); 506 extern void ata_get_geometry_pp(IORBH FAR16DATA *vIorb, IORBH *pIorb); 507 extern int ata_unit_ready(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot); 508 extern int ata_read(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot); 509 extern int ata_read_unaligned(IORBH *pIorb, int slot); 510 extern void ata_read_pp(IORBH FAR16DATA *vIorb, IORBH *pIorb); 511 extern int ata_verify(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot); 512 extern int ata_write(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot); 513 extern int ata_write_unaligned(IORBH *pIorb, int slot); 514 extern void ata_write_pp(IORBH FAR16DATA *vIorb, IORBH *pIorb); 515 extern int ata_execute_ata(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot); 516 extern void ata_execute_ata_pp(IORBH FAR16DATA *vIorb, IORBH *pIorb); 517 extern int ata_req_sense(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot); 517 518 518 519 extern char *ata_dev_name (u16 *id_buf); -
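Most of the header changes above replace `_far` pointers with flat ones, and the IORB entry points now receive both a FAR16DATA handle (`vIorb`) and a flat pointer (`pIorb`). As a rough illustration of what flattening a 16:16 far pointer involves, here is a toy model: the driver's Far16ToFlat() presumably resolves the selector through the descriptor table and adds the 16-bit offset, and here a small table of linear bases stands in for that lookup. Every name and the table layout below are assumptions for illustration, not the real implementation:

```c
#include <stdint.h>

/* Toy descriptor table: one linear base per selector slot (assumption) */
static uint32_t toy_ldt_base[8];

/* Flatten a 16:16 far pointer: selector in the high word selects a
 * descriptor, whose base is added to the 16-bit offset in the low word. */
uint32_t far16_to_flat(uint32_t far16)
{
    uint16_t sel = (uint16_t)(far16 >> 16);     /* selector */
    uint16_t off = (uint16_t)(far16 & 0xffffu); /* offset */

    /* bits 3.. of the selector index the descriptor table; the low
     * three bits carry the table indicator and privilege level */
    return toy_ldt_base[(sel >> 3) & 7] + off;
}
```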
trunk/src/os2ahci/atapi.c
r155 r178 4 4 * Copyright (c) 2011 thi.guten Software Development 5 5 * Copyright (c) 2011 Mensys B.V. 6 * Copyright (c) 2013-2016 David Azarewicz 6 7 * 7 8 * Authors: Christian Mueller, Markus Thielen … … 38 39 /* -------------------------- function prototypes -------------------------- */ 39 40 40 static void atapi_req_sense_pp (IORBH _far *iorb); 41 static int atapi_pad_cdb (u8 _far *cmd_in, u16 cmd_in_len, 42 u8 _far *cmd_out, u16 _far *cmd_out_len); 41 static void atapi_req_sense_pp(IORBH FAR16DATA *vIorb, IORBH *pIorb); 42 static int atapi_pad_cdb(u8 *cmd_in, u16 cmd_in_len, u8 *cmd_out, u16 *cmd_out_len); 43 43 44 44 /* ------------------------ global/static variables ------------------------ */ … … 49 49 * Get device or media geometry. This function is not expected to be called. 50 50 */ 51 int atapi_get_geometry(IORBH _far *iorb, int slot)52 { 53 dprintf("atapi_get_geometry called\n");54 iorb_seterr( iorb, IOERR_CMD_NOT_SUPPORTED);51 int atapi_get_geometry(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot) 52 { 53 DPRINTF(2,"atapi_get_geometry called\n"); 54 iorb_seterr(pIorb, IOERR_CMD_NOT_SUPPORTED); 55 55 return(-1); 56 56 } … … 59 59 * Test whether unit is ready. This function is not expected to be called. 60 60 */ 61 int atapi_unit_ready(IORBH _far *iorb, int slot)62 { 63 dprintf("atapi_unit_ready called\n");64 iorb_seterr( iorb, IOERR_CMD_NOT_SUPPORTED);61 int atapi_unit_ready(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot) 62 { 63 DPRINTF(2,"atapi_unit_ready called\n"); 64 iorb_seterr(pIorb, IOERR_CMD_NOT_SUPPORTED); 65 65 return(-1); 66 66 } … … 69 69 * Read sectors from AHCI device. 
70 70 */ 71 int atapi_read(IORBH _far *iorb, int slot) 72 { 73 IORB_EXECUTEIO _far *io = (IORB_EXECUTEIO _far *) iorb; 71 int atapi_read(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot) 72 { 73 IORB_EXECUTEIO *io = (IORB_EXECUTEIO *)pIorb; 74 SCATGATENTRY *pSGList = (SCATGATENTRY*)Far16ToFlat(io->pSGList); 74 75 ATAPI_CDB_12 cdb; 75 AD_INFO *ai = ad_infos + iorb_unit_adapter( iorb);76 AD_INFO *ai = ad_infos + iorb_unit_adapter(pIorb); 76 77 USHORT count = io->BlockCount - io->BlocksXferred; 77 78 USHORT sg_indx; 78 79 USHORT sg_cnt; 79 int p = iorb_unit_port( iorb);80 int d = iorb_unit_device( iorb);80 int p = iorb_unit_port(pIorb); 81 int d = iorb_unit_device(pIorb); 81 82 int rc; 82 83 83 if (io->BlockCount == 0) { 84 if (io->BlockCount == 0) 85 { 84 86 /* NOP; return -1 without error in IORB to indicate success */ 85 87 return(-1); 86 88 } 87 89 88 if (add_workspace(iorb)->unaligned) { 90 if (add_workspace(pIorb)->unaligned) 91 { 89 92 /* unaligned S/G addresses present; need to use double buffers */ 90 return(atapi_read_unaligned( iorb, slot));93 return(atapi_read_unaligned(pIorb, slot)); 91 94 } 92 95 … … 101 104 do { 102 105 /* update sector count (might have been updated due to S/G limitations) */ 103 SET_CDB_32(cdb.trans_len, (u32)count);106 SET_CDB_32(cdb.trans_len, count); 104 107 105 108 /* update S/G count and index */ … … 109 112 /* issue command */ 110 113 rc = ata_cmd(ai, p, d, slot, ATA_CMD_PACKET, 111 AP_ATAPI_CMD, (void _far*) &cdb, sizeof(cdb),112 AP_SGLIST, io->pSGList + sg_indx, (u16)sg_cnt,114 AP_ATAPI_CMD, (void *) &cdb, sizeof(cdb), 115 AP_SGLIST, pSGList + sg_indx, sg_cnt, 113 116 AP_DEVICE, 0x40, 114 117 AP_FEATURES, ATAPI_FEAT_DMA | ATAPI_FEAT_DMA_TO_HOST, 115 118 AP_END); 116 119 117 if (rc > 0) { 120 if (rc > 0) 121 { 118 122 /* couldn't map all S/G elements */ 119 123 ata_max_sg_cnt(io, sg_indx, (USHORT) rc, &sg_cnt, &count); … … 121 125 } while (rc > 0 && sg_cnt > 0); 122 126 123 if (rc == 0) { 124 add_workspace(iorb)->blocks = count; 
125 add_workspace(iorb)->ppfunc = ata_read_pp; 126 127 } else if (rc > 0) { 128 iorb_seterr(iorb, IOERR_CMD_SGLIST_BAD); 129 130 } else if (rc == ATA_CMD_UNALIGNED_ADDR) { 127 if (rc == 0) 128 { 129 add_workspace(pIorb)->blocks = count; 130 add_workspace(pIorb)->ppfunc = ata_read_pp; 131 } 132 else if (rc > 0) 133 { 134 iorb_seterr(pIorb, IOERR_CMD_SGLIST_BAD); 135 } 136 else if (rc == ATA_CMD_UNALIGNED_ADDR) 137 { 131 138 /* unaligned S/G addresses detected; need to use double buffers */ 132 add_workspace(iorb)->unaligned = 1; 133 return(atapi_read_unaligned(iorb, slot)); 134 135 } else { 136 iorb_seterr(iorb, IOERR_CMD_ADD_SOFTWARE_FAILURE); 139 add_workspace(pIorb)->unaligned = 1; 140 return(atapi_read_unaligned(pIorb, slot)); 141 } 142 else 143 { 144 iorb_seterr(pIorb, IOERR_CMD_ADD_SOFTWARE_FAILURE); 137 145 } 138 146 … … 146 154 * use a transfer buffer and copy the data manually. 147 155 */ 148 int atapi_read_unaligned(IORBH _far *iorb, int slot)149 { 150 IORB_EXECUTEIO _far *io = (IORB_EXECUTEIO _far *) iorb;151 ADD_WORKSPACE _far *aws = add_workspace(iorb);156 int atapi_read_unaligned(IORBH *pIorb, int slot) 157 { 158 IORB_EXECUTEIO *io = (IORB_EXECUTEIO *)pIorb; 159 ADD_WORKSPACE *aws = add_workspace(pIorb); 152 160 ATAPI_CDB_12 cdb; 153 AD_INFO *ai = ad_infos + iorb_unit_adapter( iorb);154 int p = iorb_unit_port( iorb);155 int d = iorb_unit_device( iorb);161 AD_INFO *ai = ad_infos + iorb_unit_adapter(pIorb); 162 int p = iorb_unit_port(pIorb); 163 int d = iorb_unit_device(pIorb); 156 164 int rc; 157 165 … … 166 174 167 175 /* allocate transfer buffer */ 168 if ((aws->buf = malloc(io->BlockSize)) == NULL) { 169 iorb_seterr(iorb, IOERR_CMD_SW_RESOURCE); 176 if ((aws->buf = MemAlloc(io->BlockSize)) == NULL) 177 { 178 iorb_seterr(pIorb, IOERR_CMD_SW_RESOURCE); 170 179 return(-1); 171 180 } 172 181 173 182 rc = ata_cmd(ai, p, d, slot, ATA_CMD_PACKET, 174 AP_ATAPI_CMD, (void _far*) &cdb, sizeof(cdb),175 AP_VADDR, (void _far *) aws->buf, (u16)io->BlockSize,183 
AP_ATAPI_CMD, (void *) &cdb, sizeof(cdb), 184 AP_VADDR, (void *) aws->buf, io->BlockSize, 176 185 AP_DEVICE, 0x40, 177 186 AP_FEATURES, ATAPI_FEAT_DMA | ATAPI_FEAT_DMA_TO_HOST, 178 187 AP_END); 179 188 180 if (rc == 0) { 181 add_workspace(iorb)->blocks = 1; 182 add_workspace(iorb)->ppfunc = ata_read_pp; 183 184 } else if (rc > 0) { 185 iorb_seterr(iorb, IOERR_CMD_SGLIST_BAD); 186 187 } else { 188 iorb_seterr(iorb, IOERR_CMD_ADD_SOFTWARE_FAILURE); 189 if (rc == 0) 190 { 191 add_workspace(pIorb)->blocks = 1; 192 add_workspace(pIorb)->ppfunc = ata_read_pp; 193 194 } 195 else if (rc > 0) 196 { 197 iorb_seterr(pIorb, IOERR_CMD_SGLIST_BAD); 198 199 } 200 else 201 { 202 iorb_seterr(pIorb, IOERR_CMD_ADD_SOFTWARE_FAILURE); 189 203 } 190 204 … … 196 210 * to be called. 197 211 */ 198 int atapi_verify(IORBH _far *iorb, int slot)199 { 200 ddprintf("atapi_verify called\n");201 iorb_seterr( iorb, IOERR_CMD_NOT_SUPPORTED);212 int atapi_verify(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot) 213 { 214 DPRINTF(3,"atapi_verify called\n"); 215 iorb_seterr(pIorb, IOERR_CMD_NOT_SUPPORTED); 202 216 return(-1); 203 217 } … … 206 220 * Write sectors to AHCI device. This function is not expected to be called. 207 221 */ 208 int atapi_write(IORBH _far *iorb, int slot)209 { 210 ddprintf("atapi_write called\n");211 iorb_seterr( iorb, IOERR_CMD_NOT_SUPPORTED);222 int atapi_write(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot) 223 { 224 DPRINTF(3,"atapi_write called\n"); 225 iorb_seterr(pIorb, IOERR_CMD_NOT_SUPPORTED); 212 226 return(-1); 213 227 } … … 216 230 * Execute ATAPI command. 
217 231 */ 218 int atapi_execute_cdb(IORBH _far *iorb, int slot) 219 { 220 IORB_ADAPTER_PASSTHRU _far *pt = (IORB_ADAPTER_PASSTHRU _far *) iorb; 232 int atapi_execute_cdb(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot) 233 { 234 IORB_ADAPTER_PASSTHRU *pt = (IORB_ADAPTER_PASSTHRU *)pIorb; 235 SCATGATENTRY *pSGList = (SCATGATENTRY*)Far16ToFlat(pt->pSGList); 221 236 int rc; 222 237 u8 cdb[ATAPI_MAX_CDB_LEN]; 223 238 u16 cdb_len; 224 239 225 if (pt->ControllerCmdLen > ATAPI_MAX_CDB_LEN) { 226 iorb_seterr(iorb, IOERR_CMD_SYNTAX); 240 if (pt->ControllerCmdLen > ATAPI_MAX_CDB_LEN) 241 { 242 iorb_seterr(pIorb, IOERR_CMD_SYNTAX); 227 243 return -1; 228 244 } 229 245 /* AHCI requires 12 or 16 byte commands */ 230 atapi_pad_cdb(pt->pControllerCmd, pt->ControllerCmdLen, 231 (u8 _far *) cdb, (u16 _far *) &cdb_len); 232 233 if (cdb[0] == 0x12 || cdb[0] == 0x5a) { 246 atapi_pad_cdb(Far16ToFlat(pt->pControllerCmd), pt->ControllerCmdLen, 247 (u8 *) cdb, (u16 *) &cdb_len); 248 249 if (cdb[0] == 0x12 || cdb[0] == 0x5a) 250 { 234 251 /* somebody sets the direction flag incorrectly for those commands */ 235 252 pt->Flags |= PT_DIRECTION_IN; … … 240 257 * mechanism:" -- Storage Device Driver Reference, Scatter/Gather Lists 241 258 */ 242 rc = ata_cmd(ad_infos + iorb_unit_adapter( iorb), iorb_unit_port(iorb),243 iorb_unit_device( iorb), slot, ATA_CMD_PACKET,244 AP_ATAPI_CMD, (void _far *)cdb, cdb_len,245 AP_SGLIST, p t->pSGList, pt->cSGList,259 rc = ata_cmd(ad_infos + iorb_unit_adapter(pIorb), iorb_unit_port(pIorb), 260 iorb_unit_device(pIorb), slot, ATA_CMD_PACKET, 261 AP_ATAPI_CMD, (void *)cdb, cdb_len, 262 AP_SGLIST, pSGList, pt->cSGList, 246 263 AP_WRITE, !(pt->Flags & PT_DIRECTION_IN), 247 264 AP_FEATURES, ATAPI_FEAT_DMA, … … 249 266 AP_END); 250 267 251 if (rc) { 252 iorb_seterr(iorb, IOERR_DEVICE_NONSPECIFIC); 268 if (rc) 269 { 270 iorb_seterr(pIorb, IOERR_DEVICE_NONSPECIFIC); 253 271 } 254 272 … … 266 284 * 267 285 */ 268 int atapi_req_sense(IORBH _far *iorb, int slot)269 { 270 
SCSI_STATUS_BLOCK _far*ssb;271 ADD_WORKSPACE _far *aws = add_workspace(iorb);286 int atapi_req_sense(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot) 287 { 288 SCSI_STATUS_BLOCK *ssb; 289 ADD_WORKSPACE *aws = add_workspace(pIorb); 272 290 int rc; 273 291 u8 cdb[ATAPI_MIN_CDB_LEN]; 274 ATAPI_CDB_6 _far *pcdb = (ATAPI_CDB_6 _far*) cdb;292 ATAPI_CDB_6 *pcdb = (ATAPI_CDB_6 *) cdb; 275 293 size_t sense_buf_len = ATAPI_SENSE_LEN; 276 294 277 dprintf("atapi_req_sense\n"); 278 279 if ((iorb->RequestControl & IORB_REQ_STATUSBLOCK) && 280 iorb->StatusBlockLen >= sizeof(*ssb) && iorb->pStatusBlock != 0) { 295 DPRINTF(2,"atapi_req_sense\n"); 296 297 if ((pIorb->RequestControl & IORB_REQ_STATUSBLOCK) && 298 pIorb->StatusBlockLen >= sizeof(*ssb) && pIorb->pStatusBlock != 0) 299 { 300 ULONG ulTmp; 301 302 ulTmp = (CastFar16ToULONG(vIorb) & 0xffff0000) + pIorb->pStatusBlock; 303 ssb = (SCSI_STATUS_BLOCK *)Far16ToFlat(CastULONGToFar16(ulTmp)); 281 304 282 305 /* don't request sense data if caller asked us not to; the flag 283 306 * STATUS_DISABLE_REQEST_SENSE is not defined in the old DDK we've been 284 307 * using so we'll use the hard-coded value (0x0008) */ 285 ssb = (SCSI_STATUS_BLOCK _far *) (((u32) iorb & 0xffff0000U) + 286 (u16) iorb->pStatusBlock); 287 if (ssb->Flags & 0x0008U) { 288 iorb_seterr(iorb, IOERR_DEVICE_NONSPECIFIC); 308 if (ssb->Flags & 0x0008U) 309 { 310 iorb_seterr(pIorb, IOERR_DEVICE_NONSPECIFIC); 289 311 return(-1); 290 312 } … … 292 314 /* if the sense buffer requested is larger than our default, adjust 293 315 * the length accordingly to satisfy the caller's requirements. 
*/ 294 if (ssb->SenseData != NULL && ssb->ReqSenseLen > sense_buf_len) { 316 if (ssb->SenseData != NULL && ssb->ReqSenseLen > sense_buf_len) 317 { 295 318 sense_buf_len = ssb->ReqSenseLen; 296 319 } … … 298 321 299 322 /* allocate sense buffer in ADD workspace */ 300 if ((aws->buf = malloc(sense_buf_len)) == NULL) { 301 iorb_seterr(iorb, IOERR_CMD_SW_RESOURCE); 323 if ((aws->buf = MemAlloc(sense_buf_len)) == NULL) 324 { 325 iorb_seterr(pIorb, IOERR_CMD_SW_RESOURCE); 302 326 return(-1); 303 327 } … … 310 334 311 335 aws->ppfunc = atapi_req_sense_pp; 312 rc = ata_cmd(ad_infos + iorb_unit_adapter( iorb),313 iorb_unit_port( iorb),314 iorb_unit_device( iorb),336 rc = ata_cmd(ad_infos + iorb_unit_adapter(pIorb), 337 iorb_unit_port(pIorb), 338 iorb_unit_device(pIorb), 315 339 slot, 316 340 ATA_CMD_PACKET, 317 AP_ATAPI_CMD, (void _far*)cdb, sizeof(cdb),318 AP_VADDR, (void _far *)aws->buf, sense_buf_len,341 AP_ATAPI_CMD, (void *)cdb, sizeof(cdb), 342 AP_VADDR, (void *)aws->buf, sense_buf_len, 319 343 AP_FEATURES, ATAPI_FEAT_DMA, 320 344 AP_END); 321 345 322 if (rc > 0) { 323 iorb_seterr(iorb, IOERR_CMD_SGLIST_BAD); 324 325 } else if (rc < 0) { 346 if (rc > 0) 347 { 348 iorb_seterr(pIorb, IOERR_CMD_SGLIST_BAD); 349 } 350 else if (rc < 0) 351 { 326 352 /* we failed to get info about an error -> return 327 353 * non specific device error 328 354 */ 329 iorb_seterr( iorb, IOERR_DEVICE_NONSPECIFIC);355 iorb_seterr(pIorb, IOERR_DEVICE_NONSPECIFIC); 330 356 } 331 357 … … 337 363 * data returned and maps sense info to IORB error info. 
338 364 */ 339 static void atapi_req_sense_pp(IORBH _far *iorb)340 { 341 SCSI_STATUS_BLOCK _far*ssb;342 ADD_WORKSPACE _far *aws = add_workspace(iorb);365 static void atapi_req_sense_pp(IORBH FAR16DATA *vIorb, IORBH *pIorb) 366 { 367 SCSI_STATUS_BLOCK *ssb; 368 ADD_WORKSPACE *aws = add_workspace(pIorb); 343 369 ATAPI_SENSE_DATA *psd = (ATAPI_SENSE_DATA *) aws->buf; 344 370 345 dphex(psd, sizeof(*psd), "sense buffer:\n"); 346 347 if ((iorb->RequestControl & IORB_REQ_STATUSBLOCK) && 348 iorb->StatusBlockLen >= sizeof(*ssb) && iorb->pStatusBlock != 0) { 371 DHEXDUMP(0,psd, sizeof(*psd), "sense buffer:\n"); 372 373 if ((pIorb->RequestControl & IORB_REQ_STATUSBLOCK) && 374 pIorb->StatusBlockLen >= sizeof(*ssb) && pIorb->pStatusBlock != 0) 375 { 376 ULONG ulTmp; 349 377 350 378 /* copy sense data to IORB */ 351 ssb = (SCSI_STATUS_BLOCK _far *) (((u32) iorb & 0xffff0000U) +352 (u16) iorb->pStatusBlock);379 ulTmp = (CastFar16ToULONG(vIorb) & 0xffff0000) + pIorb->pStatusBlock; 380 ssb = (SCSI_STATUS_BLOCK *)Far16ToFlat(CastULONGToFar16(ulTmp)); 353 381 ssb->AdapterErrorCode = 0; 354 382 ssb->TargetStatus = SCSI_STAT_CHECKCOND; … … 356 384 memset(ssb->AdapterDiagInfo, 0x00, sizeof(ssb->AdapterDiagInfo)); 357 385 358 if (ssb->SenseData != NULL) { 386 if (ssb->SenseData != NULL) 387 { 359 388 memcpy(ssb->SenseData, psd, ssb->ReqSenseLen); 360 389 ssb->Flags |= STATUS_SENSEDATA_VALID; 361 390 } 362 iorb->Status |= IORB_STATUSBLOCK_AVAIL;391 pIorb->Status |= IORB_STATUSBLOCK_AVAIL; 363 392 } 364 393 365 394 /* map sense data to some IOERR_ value */ 366 switch (ATAPI_GET_SENSE(psd)) {367 395 switch (ATAPI_GET_SENSE(psd)) 396 { 368 397 case ASENSE_NO_SENSE: 369 398 case ASENSE_RECOVERED_ERROR: 370 399 /* no error; this shouldn't happen because we'll only call 371 400 * atapi_req_sense() if we received an error interrupt */ 372 iorb_seterr( iorb, IOERR_DEVICE_NONSPECIFIC);401 iorb_seterr(pIorb, IOERR_DEVICE_NONSPECIFIC); 373 402 break; 374 403 375 404 case ASENSE_NOT_READY: 376 
iorb_seterr( iorb, IOERR_UNIT_NOT_READY);405 iorb_seterr(pIorb, IOERR_UNIT_NOT_READY); 377 406 break; 378 407 379 408 case ASENSE_UNIT_ATTENTION: 380 iorb_seterr( iorb, IOERR_MEDIA_CHANGED);409 iorb_seterr(pIorb, IOERR_MEDIA_CHANGED); 381 410 break; 382 411 383 412 case ASENSE_MEDIUM_ERROR: 384 iorb_seterr( iorb, IOERR_MEDIA);413 iorb_seterr(pIorb, IOERR_MEDIA); 385 414 break; 386 415 387 416 case ASENSE_ILLEGAL_REQUEST: 388 iorb_seterr( iorb, IOERR_CMD_SYNTAX);417 iorb_seterr(pIorb, IOERR_CMD_SYNTAX); 389 418 break; 390 419 391 420 case ASENSE_DATA_PROTECT: 392 iorb_seterr( iorb, IOERR_MEDIA_WRITE_PROTECT);421 iorb_seterr(pIorb, IOERR_MEDIA_WRITE_PROTECT); 393 422 break; 394 423 395 424 case ASENSE_BLANK_CHECK: 396 iorb_seterr( iorb, IOERR_MEDIA_NOT_FORMATTED);425 iorb_seterr(pIorb, IOERR_MEDIA_NOT_FORMATTED); 397 426 break; 398 427 399 428 case ASENSE_ABORTED_COMMAND: 400 429 case ASENSE_COPY_ABORTED: 401 iorb_seterr( iorb, IOERR_CMD_ABORTED);430 iorb_seterr(pIorb, IOERR_CMD_ABORTED); 402 431 break; 403 432 404 433 default: 405 iorb_seterr( iorb, IOERR_DEVICE_NONSPECIFIC);434 iorb_seterr(pIorb, IOERR_DEVICE_NONSPECIFIC); 406 435 break; 407 436 } … … 418 447 * returns 0 on success, != 0 if the command can't be converted. 
419 448 */ 420 int atapi_pad_cdb(u8 _far *cmd_in, u16 cmd_in_len, 421 u8 _far *cmd_out, u16 _far *cmd_out_len) 422 { 423 ATAPI_CDB_12 _far *p12; 449 int atapi_pad_cdb(u8 *cmd_in, u16 cmd_in_len, u8 *cmd_out, u16 *cmd_out_len) 450 { 451 ATAPI_CDB_12 *p12; 424 452 u32 tmp; 425 453 426 if (cmd_in_len == ATAPI_MIN_CDB_LEN || cmd_in_len == ATAPI_MAX_CDB_LEN) { 454 if (cmd_in_len == ATAPI_MIN_CDB_LEN || cmd_in_len == ATAPI_MAX_CDB_LEN) 455 { 427 456 /* command does not need to be converted */ 428 457 memcpy(cmd_out, cmd_in, cmd_in_len); … … 432 461 433 462 memset(cmd_out, 0x00, ATAPI_MAX_CDB_LEN); 434 p12 = (ATAPI_CDB_12 _far*) cmd_out;463 p12 = (ATAPI_CDB_12 *) cmd_out; 435 464 /* we always convert to 12 byte CDBs */ 436 465 *cmd_out_len = ATAPI_MIN_CDB_LEN; 437 466 438 467 /* check if command can be converted */ 439 switch (cmd_in[0]) {440 468 switch (cmd_in[0]) 469 { 441 470 case ATAPI_CMD_READ_6: 442 471 case ATAPI_CMD_WRITE_6: … … 446 475 tmp = GET_CDB_24(cmd_in + 1) & 0x1fffffUL; 447 476 SET_CDB_32(p12->lba, tmp); 448 SET_CDB_32(p12->trans_len, ( u32)(cmd_in[4]));477 SET_CDB_32(p12->trans_len, (cmd_in[4])); 449 478 p12->control = cmd_in[5]; 450 479 break; -
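The atapi_pad_cdb() hunk above converts 6-byte SCSI CDBs into the 12-byte form AHCI requires: the 21-bit LBA from bytes 1..3 is widened to a 32-bit field and the 8-bit transfer length to a 32-bit field. A minimal standalone sketch of that conversion follows; the function name and the opcode table are illustrative, not taken from the driver, and like the diff it copies the length byte as-is rather than applying the SCSI "0 means 256" READ(6) convention:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative 6-byte -> 12-byte CDB padding, modeled on the idea in
 * atapi_pad_cdb(): READ(6)/WRITE(6) carry a 21-bit LBA in bytes 1..3
 * and an 8-bit transfer length in byte 4; the 12-byte forms widen both
 * fields. Only READ(6) -> READ(12) and WRITE(6) -> WRITE(12) are
 * handled here.
 */
static int pad_cdb_6_to_12(const uint8_t *in, uint8_t *out)
{
    uint32_t lba, len;
    uint8_t opcode;

    switch (in[0]) {
    case 0x08: opcode = 0xA8; break;   /* READ(6)  -> READ(12)  */
    case 0x0A: opcode = 0xAA; break;   /* WRITE(6) -> WRITE(12) */
    default:   return -1;              /* cannot convert */
    }

    /* 21-bit LBA: low 5 bits of byte 1, then bytes 2 and 3 */
    lba = ((uint32_t)(in[1] & 0x1F) << 16) | ((uint32_t)in[2] << 8) | in[3];
    len = in[4];   /* copied verbatim; "0 means 256 blocks" not applied */

    memset(out, 0, 12);
    out[0] = opcode;
    out[2] = (uint8_t)(lba >> 24);     /* big-endian LBA in bytes 2..5 */
    out[3] = (uint8_t)(lba >> 16);
    out[4] = (uint8_t)(lba >> 8);
    out[5] = (uint8_t)lba;
    out[6] = (uint8_t)(len >> 24);     /* big-endian length, bytes 6..9 */
    out[7] = (uint8_t)(len >> 16);
    out[8] = (uint8_t)(len >> 8);
    out[9] = (uint8_t)len;
    out[11] = in[5];                   /* control byte */
    return 0;
}
```

The real code instead zero-fills a 12-byte buffer and uses the driver's GET_CDB_24/SET_CDB_32 macros, but the field layout is the same.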
trunk/src/os2ahci/atapi.h
r112 r178 4 4 * Copyright (c) 2011 thi.guten Software Development 5 5 * Copyright (c) 2011 Mensys B.V. 6 * Copyright (c) 2013-2016 David Azarewicz 6 7 * 7 8 * Authors: Christian Mueller, Markus Thielen … … 166 167 /* -------------------------- function prototypes -------------------------- */ 167 168 168 extern int atapi_get_geometry (IORBH _far *iorb, int slot);169 extern int atapi_unit_ready (IORBH _far *iorb, int slot);170 extern int atapi_read (IORBH _far *iorb, int slot);171 extern int atapi_read_unaligned (IORBH _far *iorb, int slot);172 extern int atapi_verify (IORBH _far *iorb, int slot);173 extern int atapi_write (IORBH _far *iorb, int slot);174 extern int atapi_execute_cdb (IORBH _far *iorb, int slot);175 extern int atapi_req_sense (IORBH _far *iorb, int slot);169 extern int atapi_get_geometry(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot); 170 extern int atapi_unit_ready(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot); 171 extern int atapi_read(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot); 172 extern int atapi_read_unaligned(IORBH *pIorb, int slot); 173 extern int atapi_verify(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot); 174 extern int atapi_write(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot); 175 extern int atapi_execute_cdb(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot); 176 extern int atapi_req_sense(IORBH FAR16DATA *vIorb, IORBH *pIorb, int slot); 176 177 -
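The prototype changes above reflect this changeset's split between 16-bit far IORB pointers (FAR16DATA) and flat pointers. Elsewhere in the diff, the status block's far address is rebuilt from the IORB's own far address with `(CastFar16ToULONG(vIorb) & 0xffff0000) + pIorb->pStatusBlock`, i.e. the IORB's selector is kept and a new 16-bit offset is substituted. A hedged sketch of just that 16:16 arithmetic (the helper name is illustrative; the driver then converts the result to a flat pointer with Far16ToFlat()):

```c
#include <assert.h>
#include <stdint.h>

/* A 16:16 far address packed into 32 bits: selector in the high word,
 * offset in the low word. Keep the selector, substitute the offset.
 * Because the low word is masked off first, OR and addition give the
 * same result for any 16-bit offset.
 */
static uint32_t far16_same_segment(uint32_t far_addr, uint16_t new_offset)
{
    return (far_addr & 0xFFFF0000u) | new_offset;
}
```

This works because pStatusBlock is stored as an offset within the same segment that the IORB itself lives in; an offset from a different segment would produce a bogus address.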
trunk/src/os2ahci/ctxhook.c
r161 r178 4 4 * Copyright (c) 2011 thi.guten Software Development 5 5 * Copyright (c) 2011 Mensys B.V. 6 * Copyright (c) 2013-2016 David Azarewicz 6 7 * 7 8 * Authors: Christian Mueller, Markus Thielen … … 87 88 * in the interrupt and error handlers. 88 89 */ 89 void restart_ctxhook(ULONG parm)90 void _Syscall restart_ctxhook(ULONG parm) 90 91 { 91 92 IORB_QUEUE done_queue; 92 93 AD_INFO *ai; 93 IORBH _far *problem_iorb;94 IORBH _far *iorb;95 IORBH _far *next = NULL;96 u8 _far*port_mmio;94 IORBH FAR16DATA *vProblemIorb; 95 IORBH FAR16DATA *vIorb; 96 IORBH FAR16DATA *vNext = NULL; 97 u8 *port_mmio; 97 98 int rearm_ctx_hook = 0; 98 99 int need_reset; … … 101 102 int p; 102 103 103 dprintf("restart_ctxhook() started\n");104 DPRINTF(2,"restart_ctxhook() started\n"); 104 105 memset(&done_queue, 0x00, sizeof(done_queue)); 105 106 106 107 spin_lock(drv_lock); 107 108 108 for (a = 0; a < ad_info_cnt; a++) { 109 for (a = 0; a < ad_info_cnt; a++) 110 { 109 111 ai = ad_infos + a; 110 112 111 if (ai->busy) { 113 if (ai->busy) 114 { 112 115 /* this adapter is busy; leave it alone for now */ 113 116 rearm_ctx_hook = 1; … … 115 118 } 116 119 117 for (p = 0; p <= ai->port_max; p++) { 118 if (ports_to_restart[a] & (1UL << p)) { 120 for (p = 0; p <= ai->port_max; p++) 121 { 122 if (ports_to_restart[a] & (1UL << p)) 123 { 119 124 ports_to_restart[a] &= ~(1UL << p); 120 125 121 126 /* restart this port */ 122 127 port_mmio = port_base(ai, p); 123 problem_iorb = NULL;128 vProblemIorb = NULL; 124 129 need_reset = 0; 125 130 126 dprintf("port %d, TF_DATA: 0x%lx\n", p, readl(port_mmio + PORT_TFDATA));131 DPRINTF(2,"port %d, TF_DATA: 0x%x\n", p, readl(port_mmio + PORT_TFDATA)); 127 132 128 133 /* get "current command slot"; only valid if there are no NCQ cmds */ 129 134 ccs = (int) ((readl(port_mmio + PORT_CMD) >> 8) & 0x1f); 130 ddprintf(" PORT_CMD = 0x%x\n", ccs); 131 132 for (iorb = ai->ports[p].iorb_queue.root; iorb != NULL; iorb = next) { 133 ADD_WORKSPACE _far *aws = 
add_workspace(iorb); 134 next = iorb->pNxtIORB; 135 136 if (aws->queued_hw) { 137 if (ai->ports[p].ncq_cmds & (1UL << aws->cmd_slot)) { 135 DPRINTF(3," PORT_CMD = 0x%x\n", ccs); 136 137 for (vIorb = ai->ports[p].iorb_queue.vRoot; vIorb != NULL; vIorb = vNext) 138 { 139 IORBH *pIorb = Far16ToFlat(vIorb); 140 ADD_WORKSPACE *aws = add_workspace(pIorb); 141 vNext = pIorb->pNxtIORB; 142 143 if (aws->queued_hw) 144 { 145 if (ai->ports[p].ncq_cmds & (1UL << aws->cmd_slot)) 146 { 138 147 /* NCQ command; force non-NCQ mode and trigger port reset */ 139 148 ai->ports[p].ncq_cmds &= ~(1UL << aws->cmd_slot); 140 149 aws->no_ncq = 1; 141 150 need_reset = 1; 142 } else { 151 } 152 else 153 { 143 154 /* regular command; clear cmd bit and identify problem IORB */ 144 155 ai->ports[p].reg_cmds &= ~(1UL << aws->cmd_slot); 145 if (aws->cmd_slot == ccs) { 156 if (aws->cmd_slot == ccs) 157 { 146 158 /* this is the non-NCQ command that failed */ 147 ddprintf("failing IORB: %Fp\n", iorb);148 problem_iorb = iorb;159 DPRINTF(0,"failing IORB: %x\n", vIorb); 160 vProblemIorb = vIorb; 149 161 } 150 162 } 151 163 /* we can requeue all IORBs unconditionally (see function comment) */ 152 if (aws->retries++ < MAX_RETRIES) { 153 iorb_requeue(iorb); 154 155 } else { 164 if (aws->retries++ < MAX_RETRIES) 165 { 166 iorb_requeue(pIorb); 167 } 168 else 169 { 156 170 /* retry count exceeded; consider IORB aborted */ 157 iorb_seterr(iorb, IOERR_CMD_ABORTED); 158 iorb_queue_del(&ai->ports[p].iorb_queue, iorb); 159 iorb_queue_add(&done_queue, iorb); 160 if (iorb == problem_iorb) { 171 iorb_seterr(pIorb, IOERR_CMD_ABORTED); 172 iorb_queue_del(&ai->ports[p].iorb_queue, vIorb); 173 iorb_queue_add(&done_queue, vIorb, pIorb); 174 if (vIorb == vProblemIorb) 175 { 161 176 /* no further analysis -- we're done with this one */ 162 problem_iorb = NULL;177 vProblemIorb = NULL; 163 178 } 164 179 } … … 167 182 168 183 /* sanity check: issued command bitmaps should be 0 now */ 169 if (ai->ports[p].ncq_cmds != 0 || 
ai->ports[p].reg_cmds != 0) { 170 dprintf("warning: commands issued not 0 (%08lx/%08lx); resetting...\n", 184 if (ai->ports[p].ncq_cmds != 0 || ai->ports[p].reg_cmds != 0) 185 { 186 DPRINTF(0,"warning: commands issued not 0 (%08lx/%08lx); resetting...\n", 171 187 ai->ports[p].ncq_cmds, ai->ports[p].reg_cmds); 172 188 need_reset = 1; 173 189 } 174 190 175 if (!need_reset) { 176 if ((readl(port_mmio + PORT_TFDATA) & 0x88) != 0) { 191 if (!need_reset) 192 { 193 if ((readl(port_mmio + PORT_TFDATA) & 0x88) != 0) 194 { 177 195 /* device is not in an idle state */ 178 196 need_reset = 1; … … 183 201 ai->busy = 1; 184 202 spin_unlock(drv_lock); 185 if (need_reset) { 203 if (need_reset) 204 { 186 205 ahci_reset_port(ai, p, 1); 187 } else { 206 } 207 else 208 { 188 209 ahci_stop_port(ai, p); 189 210 ahci_start_port(ai, p, 1); … … 197 218 ai->ports[p].cmd_slot = 0; 198 219 199 if (problem_iorb != NULL) { 220 if (vProblemIorb != NULL) 221 { 222 IORBH *pProblemIorb = Far16ToFlat(vProblemIorb); 200 223 /* get details about the error that caused this IORB to fail */ 201 if (need_reset) { 224 if (need_reset) 225 { 202 226 /* no way to retrieve error details after a reset */ 203 iorb_seterr(problem_iorb, IOERR_DEVICE_NONSPECIFIC); 204 iorb_queue_del(&ai->ports[p].iorb_queue, problem_iorb); 205 iorb_queue_add(&done_queue, problem_iorb); 206 207 } else { 227 iorb_seterr(pProblemIorb, IOERR_DEVICE_NONSPECIFIC); 228 iorb_queue_del(&ai->ports[p].iorb_queue, vProblemIorb); 229 iorb_queue_add(&done_queue, vProblemIorb, pProblemIorb); 230 231 } 232 else 233 { 208 234 /* get sense information */ 209 ADD_WORKSPACE _far *aws = add_workspace(problem_iorb);210 int d = iorb_unit_device(p roblem_iorb);211 int (*req_sense)(IORBH _far*, int) = (ai->ports[p].devs[d].atapi) ?235 ADD_WORKSPACE *aws = add_workspace(pProblemIorb); 236 int d = iorb_unit_device(pProblemIorb); 237 int (*req_sense)(IORBH FAR16DATA *, IORBH *, int) = (ai->ports[p].devs[d].atapi) ? 
212 238 atapi_req_sense : ata_req_sense; 213 239 … … 215 241 aws->queued_hw = 1; 216 242 217 if (req_sense(problem_iorb, 0) == 0) { 243 if (req_sense(vProblemIorb, pProblemIorb, 0) == 0) 244 { 218 245 /* execute request sense on slot #0 before anything else comes along */ 219 ADD_StartTimerMS(&aws->timer, 5000, (PFN) timeout_callback, 220 problem_iorb, 0); 246 Timer_StartTimerMS(&aws->timer, 5000, timeout_callback, CastFar16ToULONG(vProblemIorb)); 221 247 aws->cmd_slot = 0; 222 248 ai->ports[p].reg_cmds = 1; … … 224 250 readl(port_mmio); /* flush */ 225 251 226 } else { 252 } 253 else 254 { 227 255 /* IORB is expected to contain the error code; just move to done queue */ 228 iorb_queue_del(&ai->ports[p].iorb_queue, problem_iorb);229 iorb_queue_add(&done_queue, problem_iorb);256 iorb_queue_del(&ai->ports[p].iorb_queue, vProblemIorb); 257 iorb_queue_add(&done_queue, vProblemIorb, pProblemIorb); 230 258 } 231 259 } … … 238 266 239 267 /* call notification routine on all IORBs which have completed */ 240 for (iorb = done_queue.root; iorb != NULL; iorb = next) { 241 next = iorb->pNxtIORB; 268 for (vIorb = done_queue.vRoot; vIorb != NULL; vIorb = vNext) 269 { 270 IORBH *pIorb = Far16ToFlat(vIorb); 271 vNext = pIorb->pNxtIORB; 242 272 243 273 spin_lock(drv_lock); 244 aws_free(add_workspace( iorb));274 aws_free(add_workspace(pIorb)); 245 275 spin_unlock(drv_lock); 246 276 247 iorb_complete( iorb);277 iorb_complete(vIorb, pIorb); 248 278 } 249 279 … … 253 283 spin_unlock(drv_lock); 254 284 255 dprintf("restart_ctxhook() completed\n");285 DPRINTF(2,"restart_ctxhook() completed\n"); 256 286 257 287 /* Check whether we have to rearm ourselves because some adapters were busy 258 288 * when we wanted to restart ports on them. 
259 289 */ 260 if (rearm_ctx_hook) { 290 if (rearm_ctx_hook) 291 { 261 292 msleep(250); 262 DevHelp_ArmCtxHook(0, restart_ctxhook_h);293 KernArmHook(restart_ctxhook_h, 0, 0); 263 294 } 264 295 } … … 290 321 * the upstream code might reuse the IORBs before we're done with them. 291 322 */ 292 void reset_ctxhook(ULONG parm)323 void _Syscall reset_ctxhook(ULONG parm) 293 324 { 294 325 IORB_QUEUE done_queue; 295 326 AD_INFO *ai; 296 IORBH _far *iorb;297 IORBH _far *next = NULL;327 IORBH FAR16DATA *vIorb; 328 IORBH FAR16DATA *vNext = NULL; 298 329 int rearm_ctx_hook = 0; 299 330 int a; 300 331 int p; 301 332 302 dprintf("reset_ctxhook() started\n");333 DPRINTF(2,"reset_ctxhook() started\n"); 303 334 memset(&done_queue, 0x00, sizeof(done_queue)); 304 335 305 336 spin_lock(drv_lock); 306 337 307 if (th_reset_watchdog != 0) { 338 if (th_reset_watchdog != 0) 339 { 308 340 /* watchdog timer still active -- just reset it */ 309 ADD_CancelTimer(th_reset_watchdog);341 Timer_CancelTimer(th_reset_watchdog); 310 342 th_reset_watchdog = 0; 311 343 } 312 344 313 345 /* add ports of active IORBs from the abort queue to ports_to_reset[] */ 314 for (iorb = abort_queue.root; iorb != NULL; iorb = next) { 315 next = iorb->pNxtIORB; 316 a = iorb_unit_adapter(iorb); 317 p = iorb_unit_port(iorb); 346 for (vIorb = abort_queue.vRoot; vIorb != NULL; vIorb = vNext) 347 { 348 IORBH *pIorb = Far16ToFlat(vIorb); 349 vNext = pIorb->pNxtIORB; 350 a = iorb_unit_adapter(pIorb); 351 p = iorb_unit_port(pIorb); 318 352 ai = ad_infos + a; 319 353 320 if (ai->busy) { 354 if (ai->busy) 355 { 321 356 /* this adapter is busy; leave it alone for now */ 322 357 rearm_ctx_hook = 1; … … 325 360 326 361 /* move IORB to the local 'done' queue */ 327 iorb_queue_del(&abort_queue, iorb);328 iorb_queue_add(&done_queue, iorb);362 iorb_queue_del(&abort_queue, vIorb); 363 iorb_queue_add(&done_queue, vIorb, pIorb); 329 364 330 365 /* reset port if the IORB has already been queued to hardware */ 331 if 
(add_workspace(iorb)->queued_hw) { 366 if (add_workspace(pIorb)->queued_hw) 367 { 332 368 /* prepare port reset */ 333 369 ports_to_reset[a] |= (1UL << p); … … 336 372 337 373 /* reset all ports in 'ports_to_reset[]' */ 338 for (a = 0; a < ad_info_cnt; a++) { 374 for (a = 0; a < ad_info_cnt; a++) 375 { 339 376 ai = ad_infos + a; 340 377 341 if (ai->busy) { 378 if (ai->busy) 379 { 342 380 /* this adapter is busy; leave it alone for now */ 343 381 rearm_ctx_hook = 1; … … 345 383 } 346 384 347 for (p = 0; p <= ai->port_max; p++) { 348 if (ports_to_reset[a] & (1UL << p)) { 385 for (p = 0; p <= ai->port_max; p++) 386 { 387 if (ports_to_reset[a] & (1UL << p)) 388 { 349 389 ports_to_reset[a] &= ~(1UL << p); 350 390 … … 366 406 367 407 /* retry or abort all remaining active commands on this port */ 368 for (iorb = ai->ports[p].iorb_queue.root; iorb != NULL; iorb = next) { 369 ADD_WORKSPACE _far *aws = add_workspace(iorb); 370 next = iorb->pNxtIORB; 371 372 if (aws->queued_hw) { 408 for (vIorb = ai->ports[p].iorb_queue.vRoot; vIorb != NULL; vIorb = vNext) 409 { 410 IORBH *pIorb = Far16ToFlat(vIorb); 411 ADD_WORKSPACE *aws = add_workspace(pIorb); 412 vNext = pIorb->pNxtIORB; 413 414 if (aws->queued_hw) 415 { 373 416 /* this IORB had already been queued to HW when we reset the port */ 374 if (aws->idempotent && aws->retries++ < MAX_RETRIES) { 417 if (aws->idempotent && aws->retries++ < MAX_RETRIES) 418 { 375 419 /* we can retry this IORB */ 376 iorb_requeue(iorb); 377 378 } else { 420 iorb_requeue(pIorb); 421 422 } 423 else 424 { 379 425 /* we cannot retry this IORB; consider it aborted */ 380 iorb->ErrorCode = IOERR_CMD_ABORTED;381 iorb_queue_del(&ai->ports[p].iorb_queue, iorb);382 iorb_queue_add(&done_queue, iorb);426 pIorb->ErrorCode = IOERR_CMD_ABORTED; 427 iorb_queue_del(&ai->ports[p].iorb_queue, vIorb); 428 iorb_queue_add(&done_queue, vIorb, pIorb); 383 429 } 384 430 } … … 391 437 392 438 /* complete all aborted IORBs */ 393 for (iorb = done_queue.root; iorb != NULL; 
iorb = next) { 394 next = iorb->pNxtIORB; 439 for (vIorb = done_queue.vRoot; vIorb != NULL; vIorb = vNext) 440 { 441 IORBH *pIorb = Far16ToFlat(vIorb); 442 vNext = pIorb->pNxtIORB; 395 443 396 444 spin_lock(drv_lock); 397 aws_free(add_workspace( iorb));445 aws_free(add_workspace(pIorb)); 398 446 spin_unlock(drv_lock); 399 447 400 iorb->Status |= IORB_ERROR;401 iorb_complete( iorb);448 pIorb->Status |= IORB_ERROR; 449 iorb_complete(vIorb, pIorb); 402 450 } 403 451 … … 407 455 spin_unlock(drv_lock); 408 456 409 dprintf("reset_ctxhook() completed\n");457 DPRINTF(2,"reset_ctxhook() completed\n"); 410 458 411 459 /* Check whether we have to rearm ourselves because some adapters were busy 412 460 * when we wanted to reset ports on them. 413 461 */ 414 if (rearm_ctx_hook) { 462 if (rearm_ctx_hook) 463 { 415 464 msleep(250); 416 DevHelp_ArmCtxHook(0, reset_ctxhook_h);465 KernArmHook(reset_ctxhook_h, 0, 0); 417 466 } 418 467 } … … 424 473 * busy system. Either way, this requires some task-time help. 425 474 */ 426 void engine_ctxhook(ULONG parm)475 void _Syscall engine_ctxhook(ULONG parm) 427 476 { 428 477 int iorbs_sent; 429 478 int i; 430 479 431 dprintf("engine_ctxhook() started\n"); 432 if (resume_sleep_flag) { 480 DPRINTF(2,"engine_ctxhook() started\n"); 481 if (resume_sleep_flag) 482 { 433 483 msleep(resume_sleep_flag); 434 484 resume_sleep_flag = 0; … … 436 486 437 487 spin_lock(drv_lock); 438 for (i = 0; i < 10; i++) { 439 if ((iorbs_sent = trigger_engine_1()) == 0) { 440 break; 441 } 488 for (i = 0; i < 10; i++) 489 { 490 if ((iorbs_sent = trigger_engine_1()) == 0) break; 442 491 } 443 492 spin_unlock(drv_lock); 444 493 445 dprintf("engine_ctxhook() completed\n"); 446 447 if (iorbs_sent != 0) { 494 DPRINTF(2,"engine_ctxhook() completed\n"); 495 496 if (iorbs_sent != 0) 497 { 448 498 /* need to rearm ourselves for another run */ 449 499 msleep(250); 450 DevHelp_ArmCtxHook(0, engine_ctxhook_h);500 KernArmHook(engine_ctxhook_h, 0, 0); 451 501 } 452 502 } -
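Both context hooks above (restart_ctxhook and reset_ctxhook) use the same bounded-retry decision: an IORB is requeued while `aws->retries++ < MAX_RETRIES`, and aborted with IOERR_CMD_ABORTED once the budget is spent. A minimal sketch of that decision in isolation; the structure and MAX_RETRIES value here are invented for illustration:

```c
#include <assert.h>

#define MAX_RETRIES 3          /* illustrative; the driver's value is not shown in this diff */

struct fake_iorb {
    int retries;               /* mirrors aws->retries */
    int aborted;               /* set when the retry budget is exhausted */
};

/* Returns 1 if the request should be requeued for another attempt, 0 if
 * it must be failed, mirroring "if (aws->retries++ < MAX_RETRIES)
 * iorb_requeue(...) else abort". Note the post-increment: the counter
 * advances even on the final, failing check.
 */
static int retry_or_abort(struct fake_iorb *iorb)
{
    if (iorb->retries++ < MAX_RETRIES)
        return 1;              /* requeue */
    iorb->aborted = 1;         /* caller would set IOERR_CMD_ABORTED */
    return 0;
}
```

The point of capping retries in the hook rather than in the interrupt handler is that requeueing is unconditional after a port restart, so the counter is the only thing preventing a permanently failing device from looping forever.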
trunk/src/os2ahci/ioctl.c
r165 r178 4 4 * Copyright (c) 2011 thi.guten Software Development 5 5 * Copyright (c) 2011 Mensys B.V. 6 * Copyright (c) 2013-2016 David Azarewicz 6 7 * 7 8 * Authors: Christian Mueller, Markus Thielen … … 54 55 SCATGATENTRY sg_lst[AHCI_MAX_SG / 2]; /* scatter/gather list */ 55 56 ULONG sg_cnt; /* number of S/G elements */ 56 UCHAR lh[16]; /* lock handle for VMLock() */57 struct _KernVMLock_t *lh; /* lock handle for VMLock() */ 57 58 } IOCTL_CONTEXT; 58 59 … … 60 61 61 62 static USHORT do_smart (BYTE unit, BYTE sub_func, BYTE cnt, BYTE lba_l, 62 void _far*buf);63 static int map_unit (BYTE unit, USHORT _far *a, USHORT _far*p,64 USHORT _far*d);65 static LIN lin (void _far *p); 66 67 IORBH _far * _far _cdecl ioctl_wakeup(IORBH _far *iorb);63 void *buf); 64 static int map_unit (BYTE unit, USHORT *a, USHORT *p, 65 USHORT *d); 66 67 IORBH *IoctlWakeup(ULONG ulArg); 68 extern IORBH FAR16DATA * __far16 IoctlWakeup16(IORBH FAR16DATA*); 68 69 69 70 /* ------------------------ global/static variables ------------------------ */ … … 75 76 * adapter/port/device combinations are available. 
76 77 */ 77 USHORT ioctl_get_devlist(R P_GENIOCTL _far*ioctl)78 { 79 OS2AHCI_DEVLIST _far *devlst = (OS2AHCI_DEVLIST _far *) ioctl->DataPacket;78 USHORT ioctl_get_devlist(REQPACKET *ioctl) 79 { 80 OS2AHCI_DEVLIST *devlst = (OS2AHCI_DEVLIST*)Far16ToFlat(ioctl->ioctl.pvData); 80 81 USHORT maxcnt = 0; 81 82 USHORT cnt = 0; … … 84 85 USHORT d; 85 86 86 /* verify addressability of parm buffer (number of devlst elements) */ 87 if (DevHelp_VerifyAccess((SEL) ((ULONG) ioctl->ParmPacket >> 16), 88 sizeof(USHORT), 89 (USHORT) (ULONG) ioctl->ParmPacket, 90 VERIFY_READONLY) != 0) { 91 return(STDON | STERR | 0x05); 92 } 93 94 maxcnt = *((USHORT _far *) ioctl->ParmPacket); 95 96 /* verify addressability of return buffer (OS2AHCI_DEVLIST) */ 97 if (DevHelp_VerifyAccess((SEL) ((ULONG) devlst >> 16), 98 offsetof(OS2AHCI_DEVLIST, devs) + 99 sizeof(devlst->devs) * maxcnt, 100 (USHORT) (ULONG) devlst, 101 VERIFY_READWRITE) != 0) { 102 return(STDON | STERR | 0x05); 87 if (KernCopyIn(&maxcnt, ioctl->ioctl.pvParm, sizeof(maxcnt))) 88 { 89 return(RPDONE | RPERR_PARAMETER); 103 90 } 104 91 105 92 /* fill-in device list */ 106 for (a = 0; a < ad_info_cnt; a++) { 93 for (a = 0; a < ad_info_cnt; a++) 94 { 107 95 AD_INFO *ai = ad_infos + a; 108 96 109 for (p = 0; p <= ai->port_max; p++) { 97 for (p = 0; p <= ai->port_max; p++) 98 { 110 99 P_INFO *pi = ai->ports + p; 111 100 112 for (d = 0; d <= pi->dev_max; d++) { 113 if (pi->devs[d].present) { 101 for (d = 0; d <= pi->dev_max; d++) 102 { 103 if (pi->devs[d].present) 104 { 114 105 /* add this device to the device list */ 115 if (cnt >= maxcnt) { 106 if (cnt >= maxcnt) 107 { 116 108 /* not enough room in devlst */ 117 109 goto ioctl_get_device_done; … … 136 128 ioctl_get_device_done: 137 129 devlst->cnt = cnt; 138 return( STDON);130 return(RPDONE); 139 131 } 140 132 … … 143 135 * requests. 
144 136 */ 145 USHORT ioctl_passthrough(R P_GENIOCTL _far*ioctl)146 { 147 OS2AHCI_PASSTHROUGH _far *req = (OS2AHCI_PASSTHROUGH _far *) ioctl->ParmPacket;148 char _far *sense_buf = (char _far *) ioctl->DataPacket;137 USHORT ioctl_passthrough(REQPACKET *ioctl) 138 { 139 OS2AHCI_PASSTHROUGH *req = (OS2AHCI_PASSTHROUGH *)Far16ToFlat(ioctl->ioctl.pvParm); 140 char *sense_buf = (char *)Far16ToFlat(ioctl->ioctl.pvData); 149 141 IOCTL_CONTEXT *ic; 150 142 USHORT ret; … … 152 144 USHORT p; 153 145 USHORT d; 154 155 /* verify addressability of parm buffer (OS2AHCI_PASSTHROUGH) */156 if (DevHelp_VerifyAccess((SEL) ((ULONG) req >> 16),157 sizeof(OS2AHCI_PASSTHROUGH),158 (USHORT) (ULONG) req,159 VERIFY_READWRITE) != 0) {160 return(STDON | STERR | 0x05);161 }162 163 /* verify addressability of data buffer (sense data) */164 if (req->sense_len > 0) {165 if (DevHelp_VerifyAccess((SEL) ((ULONG) sense_buf >> 16),166 req->sense_len,167 (USHORT) (ULONG) sense_buf,168 VERIFY_READWRITE) != 0) {169 return(STDON | STERR | 0x05);170 }171 }172 146 173 147 /* Verify basic request parameters such as adapter/port/device, size of … … 178 152 d = req->device; 179 153 if (a >= ad_info_cnt || p > ad_infos[a].port_max || 180 d > ad_infos[a].ports[p].dev_max || !ad_infos[a].ports[p].devs[d].present) { 181 return(STDON | STERR | ERROR_I24_BAD_UNIT); 154 d > ad_infos[a].ports[p].dev_max || !ad_infos[a].ports[p].devs[d].present) 155 { 156 return(RPDONE | RPERR_UNIT); 182 157 } 183 158 if ((req->buflen + 4095) / 4096 + 1 > AHCI_MAX_SG / 2 || 184 req->cmdlen < 6 || req->cmdlen > sizeof(req->cmd)) { 185 return(STDON | STERR | ERROR_I24_INVALID_PARAMETER); 159 req->cmdlen < 6 || req->cmdlen > sizeof(req->cmd)) 160 { 161 return(RPDONE | RPERR_PARAMETER); 186 162 } 187 163 188 164 /* allocate IOCTL context data */ 189 if ((ic = malloc(sizeof(*ic))) == NULL) { 190 return(STDON | STERR | ERROR_I24_GEN_FAILURE); 165 if ((ic = MemAlloc(sizeof(*ic))) == NULL) 166 { 167 return(RPDONE | RPERR_GENERAL); 191 168 } 
192 169 memset(ic, 0x00, sizeof(*ic)); 193 194 /* lock DMA transfer buffer into memory and construct S/G list */195 if (req->buflen > 0) {196 if (DevHelp_VMLock(VMDHL_LONG | !((req->flags & PT_WRITE) ? VMDHL_WRITE : 0),197 req->buf, req->buflen, lin(ic->sg_lst), lin(&ic->lh),198 &ic->sg_cnt) != 0) {199 /* couldn't lock buffer and/or produce a S/G list */200 free(ic);201 return(STDON | STERR | ERROR_I24_INVALID_PARAMETER);202 }203 }204 170 205 171 /* fill in adapter passthrough fields */ … … 210 176 ic->iorb.iorbh.RequestControl = IORB_ASYNC_POST; 211 177 ic->iorb.iorbh.Timeout = req->timeout; 212 ic->iorb.iorbh.NotifyAddress = ioctl_wakeup;213 214 ic->iorb.cSGList 215 ic->iorb.pSGList = ic->sg_lst;216 ic->iorb.ppSGLIST = virt_to_phys(ic->sg_lst);178 ic->iorb.iorbh.NotifyAddress = IoctlWakeup16; 179 180 ic->iorb.cSGList = ic->sg_cnt; 181 ic->iorb.pSGList = MemFar16Adr(ic->sg_lst); 182 ic->iorb.ppSGLIST = MemPhysAdr(ic->sg_lst); 217 183 218 184 memcpy(ic->cmd, req->cmd.cdb, sizeof(ic->cmd)); 219 185 ic->iorb.ControllerCmdLen = req->cmdlen; 220 ic->iorb.pControllerCmd = ic->cmd;186 ic->iorb.pControllerCmd = MemFar16Adr(ic->cmd); 221 187 ic->iorb.Flags = (req->flags & PT_WRITE) ? 0 : PT_DIRECTION_IN; 222 188 223 if (req->sense_len > 0) { 189 if (req->sense_len > 0) 190 { 224 191 /* initialize SCSI status block to allow getting sense data */ 225 ic->iorb.iorbh.pStatusBlock = (BYTE *) &ic->ssb;192 ic->iorb.iorbh.pStatusBlock = CastFar16ToULONG(MemFar16Adr(&ic->ssb)); 226 193 ic->iorb.iorbh.StatusBlockLen = sizeof(ic->ssb); 227 ic->ssb.SenseData = (SCSI_REQSENSE_DATA _far*) ic->sense;194 ic->ssb.SenseData = (SCSI_REQSENSE_DATA *) ic->sense; 228 195 ic->ssb.ReqSenseLen = sizeof(ic->sense); 229 196 ic->iorb.iorbh.RequestControl |= IORB_REQ_STATUSBLOCK; … … 231 198 232 199 /* send IORB on its way */ 233 add_entry( &ic->iorb.iorbh);200 add_entry(MemFar16Adr(&ic->iorb.iorbh)); 234 201 235 202 /* Wait for IORB completion. 
*/ 236 203 spin_lock(drv_lock); 237 while (!(ic->iorb.iorbh.Status & IORB_DONE)) { 238 DevHelp_ProcBlock((ULONG) (void _far *) &ic->iorb.iorbh, 30000, 1); 204 while (!(ic->iorb.iorbh.Status & IORB_DONE)) 205 { 206 KernBlock((ULONG)&ic->iorb.iorbh, 30000, 0, NULL, NULL); 239 207 } 240 208 spin_unlock(drv_lock); 241 209 242 ret = STDON;210 ret = RPDONE; 243 211 244 212 /* map IORB error codes to device driver error codes */ 245 if (ic->iorb.iorbh.Status & IORB_ERROR) { 246 ret |= STERR; 247 248 switch (ic->iorb.iorbh.ErrorCode) { 249 213 if (ic->iorb.iorbh.Status & IORB_ERROR) 214 { 215 ret |= RPERR; 216 217 switch (ic->iorb.iorbh.ErrorCode) 218 { 250 219 case IOERR_UNIT_NOT_READY: 251 ret |= ERROR_I24_NOT_READY;220 ret |= RPERR_NOTREADY; 252 221 break; 253 222 254 223 case IOERR_MEDIA_CHANGED: 255 ret |= ERROR_I24_DISK_CHANGE;224 ret |= RPERR_DISK; 256 225 break; 257 226 258 227 case IOERR_MEDIA: 259 228 case IOERR_MEDIA_NOT_FORMATTED: 260 ret |= ERROR_I24_CRC;229 ret |= RPERR_CRC; 261 230 break; 262 231 263 232 case IOERR_CMD_SYNTAX: 264 233 case IOERR_CMD_NOT_SUPPORTED: 265 ret |= ERROR_I24_BAD_COMMAND;234 ret |= RPERR_BADCOMMAND; 266 235 break; 267 236 268 237 case IOERR_MEDIA_WRITE_PROTECT: 269 ret |= ERROR_I24_WRITE_PROTECT;238 ret |= RPERR_PROTECT; 270 239 break; 271 240 272 241 case IOERR_CMD_ABORTED: 273 ret |= ERROR_I24_CHAR_CALL_INTERRUPTED;242 ret |= RPERR_INTERRUPTED; 274 243 break; 275 244 276 245 case IOERR_RBA_ADDRESSING_ERROR: 277 ret |= ERROR_I24_SEEK;246 ret |= RPERR_SEEK; 278 247 break; 279 248 280 249 case IOERR_RBA_LIMIT: 281 ret |= ERROR_I24_SECTOR_NOT_FOUND;250 ret |= RPERR_SECTOR; 282 251 break; 283 252 284 253 case IOERR_CMD_SGLIST_BAD: 285 ret |= ERROR_I24_INVALID_PARAMETER;254 ret |= RPERR_PARAMETER; 286 255 break; 287 256 … … 292 261 case IOERR_CMD_SW_RESOURCE: 293 262 default: 294 ret |= ERROR_I24_GEN_FAILURE;263 ret |= RPERR_GENERAL; 295 264 break; 296 265 } … … 298 267 /* copy sense information, if there is any */ 299 268 if 
((ic->iorb.iorbh.Status & IORB_STATUSBLOCK_AVAIL) && 300 (ic->ssb.Flags | STATUS_SENSEDATA_VALID)) {301 memcpy(sense_buf, ic->ssb.SenseData,302 269 (ic->ssb.Flags | STATUS_SENSEDATA_VALID)) 270 { 271 memcpy(sense_buf, ic->ssb.SenseData, min(ic->ssb.ReqSenseLen, req->sense_len)); 303 272 } 304 273 305 } else if ((req->flags & PT_ATAPI) == 0) { 274 } 275 else if ((req->flags & PT_ATAPI) == 0) 276 { 306 277 /* Copy ATA cmd back to IOCTL request (ATA commands are effectively 307 278 * registers which are sometimes used to indicate return conditions, … … 311 282 } 312 283 313 free(ic); 314 if (req->buflen > 0) { 315 DevHelp_VMUnLock(lin(ic->lh)); 316 } 284 MemFree(ic); 317 285 return(ret); 318 286 } … … 325 293 * point, basically those calls required to get HDMON working. 326 294 */ 327 USHORT ioctl_gen_dsk(R P_GENIOCTL _far*ioctl)328 { 329 DSKSP_CommandParameters _far *cp = (DSKSP_CommandParameters _far *) ioctl->ParmPacket;330 UnitInformationData _far*ui;295 USHORT ioctl_gen_dsk(REQPACKET *ioctl) 296 { 297 DSKSP_CommandParameters *cp = (DSKSP_CommandParameters*)Far16ToFlat(ioctl->ioctl.pvParm); 298 UnitInformationData *ui; 331 299 OS2AHCI_PASSTHROUGH pt; 332 R P_GENIOCTLtmp_ioctl;300 REQPACKET tmp_ioctl; 333 301 USHORT size = 0; 334 302 USHORT ret; … … 338 306 UCHAR unit; 339 307 340 /* verify addressability of parm buffer (DSKSP_CommandParameters) */341 if (DevHelp_VerifyAccess((SEL) ((ULONG) cp >> 16),342 sizeof(DSKSP_CommandParameters),343 (USHORT) (ULONG) cp,344 VERIFY_READONLY) != 0) {345 return(STDON | STERR | 0x05);346 }347 308 unit = cp->byPhysicalUnit; 348 309 349 310 /* verify addressability of data buffer (depends on function code) */ 350 switch (ioctl-> Function) {351 311 switch (ioctl->ioctl.bFunction) 312 { 352 313 case DSKSP_GEN_GET_COUNTERS: 353 314 size = sizeof(DeviceCountersData); … … 363 324 } 364 325 365 if (size > 0) { 366 if (DevHelp_VerifyAccess((SEL) ((ULONG) ioctl->DataPacket >> 16), 367 size, (USHORT) (ULONG) ioctl->DataPacket, 368 
VERIFY_READWRITE) != 0) { 369 return(STDON | STERR | 0x05); 370 } 371 } 372 373 if (map_unit(unit, &a, &p, &d)) { 374 return(STDON | STERR | ERROR_I24_BAD_UNIT); 326 if (map_unit(unit, &a, &p, &d)) 327 { 328 return(RPDONE | RPERR | RPERR_UNIT); 375 329 } 376 330 377 331 /* execute generic disk request */ 378 switch (ioctl-> Function) {379 332 switch (ioctl->ioctl.bFunction) 333 { 380 334 case DSKSP_GEN_GET_COUNTERS: 381 335 /* Not supported, yet; we would need dynamically allocated device … … 383 337 * statistics buffer. For the time being, we'll return an empty buffer. 384 338 */ 385 memset( ioctl->DataPacket, 0x00, sizeof(DeviceCountersData));386 ret = STDON;339 memset(Far16ToFlat(ioctl->ioctl.pvData), 0x00, sizeof(DeviceCountersData)); 340 ret = RPDONE; 387 341 break; 388 342 … … 391 345 * even bother returning those. 392 346 */ 393 ui = (UnitInformationData _far *) ioctl->DataPacket;347 ui = (UnitInformationData*)Far16ToFlat(ioctl->ioctl.pvData); 394 348 memset(ui, 0x00, sizeof(*ui)); 395 349 … … 402 356 ui->wFlags |= UIF_SATA; 403 357 404 ret = STDON;358 ret = RPDONE; 405 359 break; 406 360 … … 408 362 /* return ATA ID buffer */ 409 363 memset(&tmp_ioctl, 0x00, sizeof(tmp_ioctl)); 410 tmp_ioctl. Category = OS2AHCI_IOCTL_CATEGORY;411 tmp_ioctl. Function = OS2AHCI_IOCTL_PASSTHROUGH;412 tmp_ioctl. ParmPacket = (void _far *) &pt;364 tmp_ioctl.ioctl.bCategory = OS2AHCI_IOCTL_CATEGORY; 365 tmp_ioctl.ioctl.bFunction = OS2AHCI_IOCTL_PASSTHROUGH; 366 tmp_ioctl.ioctl.pvParm = FlatToFar16(&pt); 413 367 414 368 memset(&pt, 0x00, sizeof(pt)); … … 420 374 ATA_CMD_ID_ATAPI : ATA_CMD_ID_ATA; 421 375 pt.buflen = size; 422 pt.buf = lin(ioctl->DataPacket);376 pt.buf = Far16ToFlat(ioctl->ioctl.pvData); 423 377 424 378 ret = gen_ioctl(&tmp_ioctl); … … 426 380 427 381 default: 428 ret = STDON | STATUS_ERR_UNKCMD;382 ret = RPDONE | RPERR_BADCOMMAND; 429 383 break; 430 384 } … … 437 391 * IBM1S506; the code has been more or less copied from DANIS506. 
438 392 */ 439 USHORT ioctl_smart(R P_GENIOCTL _far*ioctl)440 { 441 DSKSP_CommandParameters _far *cp = (DSKSP_CommandParameters _far *) ioctl->ParmPacket;393 USHORT ioctl_smart(REQPACKET *ioctl) 394 { 395 DSKSP_CommandParameters *cp = (DSKSP_CommandParameters *)Far16ToFlat(ioctl->ioctl.pvParm); 442 396 USHORT size = 0; 443 397 USHORT ret; … … 445 399 UCHAR parm; 446 400 447 /* verify addressability of parm buffer (DSKSP_CommandParameters) */448 if (DevHelp_VerifyAccess((SEL) ((ULONG) cp >> 16),449 sizeof(DSKSP_CommandParameters),450 (USHORT) (ULONG) cp,451 VERIFY_READONLY) != 0) {452 return(STDON | STERR | 0x05);453 }454 401 unit = cp->byPhysicalUnit; 455 402 456 403 /* verify addressability of data buffer (depends on SMART function) */ 457 switch (ioctl->Function) { 404 switch (ioctl->ioctl.bFunction) 405 { 458 406 459 407 case DSKSP_SMART_GETSTATUS: … … 468 416 } 469 417 470 if (size > 0) { 471 if (DevHelp_VerifyAccess((SEL) ((ULONG) ioctl->DataPacket >> 16), 472 size, (USHORT) (ULONG) ioctl->DataPacket, 473 VERIFY_READWRITE) != 0) { 474 return(STDON | STERR | 0x05); 475 } 476 parm = ioctl->DataPacket[0]; 418 if (size > 0) 419 { 420 parm = *(UCHAR*)Far16ToFlat(ioctl->ioctl.pvData); 477 421 } 478 422 479 423 /* execute SMART request */ 480 switch (ioctl-> Function) {481 424 switch (ioctl->ioctl.bFunction) 425 { 482 426 case DSKSP_SMART_ONOFF: 483 427 ret = do_smart(unit, (BYTE) ((parm) ? 
ATA_SMART_ENABLE : ATA_SMART_DISABLE), 0, 0, NULL); … … 501 445 502 446 case DSKSP_SMART_GETSTATUS: 503 ret = do_smart(unit, ATA_SMART_STATUS, 0, 0, ioctl->DataPacket);447 ret = do_smart(unit, ATA_SMART_STATUS, 0, 0, Far16ToFlat(ioctl->ioctl.pvData)); 504 448 break; 505 449 506 450 case DSKSP_SMART_GET_ATTRIBUTES: 507 ret = do_smart(unit, ATA_SMART_READ_VALUES, 0, 0, ioctl->DataPacket);451 ret = do_smart(unit, ATA_SMART_READ_VALUES, 0, 0, Far16ToFlat(ioctl->ioctl.pvData)); 508 452 break; 509 453 510 454 case DSKSP_SMART_GET_THRESHOLDS: 511 ret = do_smart(unit, ATA_SMART_READ_THRESHOLDS, 0, 0, ioctl->DataPacket);455 ret = do_smart(unit, ATA_SMART_READ_THRESHOLDS, 0, 0, Far16ToFlat(ioctl->ioctl.pvData)); 512 456 break; 513 457 514 458 case DSKSP_SMART_GET_LOG: 515 ret = do_smart(unit, ATA_SMART_READ_LOG, 1, parm, ioctl->DataPacket);459 ret = do_smart(unit, ATA_SMART_READ_LOG, 1, parm, Far16ToFlat(ioctl->ioctl.pvData)); 516 460 break; 517 461 518 462 default: 519 ret = STDON | STATUS_ERR_UNKCMD;463 ret = RPDONE | RPERR_BADCOMMAND; 520 464 } 521 465 … … 526 470 * Perform SMART request. The code has been more or less copied from DANIS506. 527 471 */ 528 static USHORT do_smart(BYTE unit, BYTE sub_func, BYTE cnt, BYTE lba_l, void _far*buf)472 static USHORT do_smart(BYTE unit, BYTE sub_func, BYTE cnt, BYTE lba_l, void *buf) 529 473 { 530 474 OS2AHCI_PASSTHROUGH pt; 531 R P_GENIOCTLioctl;475 REQPACKET ioctl; 532 476 USHORT ret; 533 477 USHORT a; … … 535 479 USHORT d; 536 480 537 if (map_unit(unit, &a, &p, &d)) { 538 return(STDON | STERR | ERROR_I24_BAD_UNIT); 481 if (map_unit(unit, &a, &p, &d)) 482 { 483 return(RPDONE | RPERR_UNIT); 539 484 } 540 485 … … 543 488 */ 544 489 memset(&ioctl, 0x00, sizeof(ioctl)); 545 ioctl. Category = OS2AHCI_IOCTL_CATEGORY;546 ioctl. Function = OS2AHCI_IOCTL_PASSTHROUGH;547 ioctl. 
ParmPacket = (void _far *) &pt;490 ioctl.ioctl.bCategory = OS2AHCI_IOCTL_CATEGORY; 491 ioctl.ioctl.bFunction = OS2AHCI_IOCTL_PASSTHROUGH; 492 ioctl.ioctl.pvParm = FlatToFar16(&pt); 548 493 549 494 memset(&pt, 0x00, sizeof(pt)); … … 557 502 pt.cmd.ata.cmd = ATA_CMD_SMART; 558 503 559 if (buf != NULL && sub_func != ATA_SMART_STATUS) { 504 if (buf != NULL && sub_func != ATA_SMART_STATUS) 505 { 560 506 pt.buflen = 512; 561 pt.buf = lin(buf);562 } 563 564 if (((ret = gen_ioctl(&ioctl)) & STERR) == 0 && sub_func == ATA_SMART_STATUS) {565 507 pt.buf = buf; 508 } 509 510 if (((ret = gen_ioctl(&ioctl)) & RPERR) == 0 && sub_func == ATA_SMART_STATUS) 511 { 566 512 /* ATA_SMART_STATUS doesn't transfer anything but instead relies on the 567 513 * returned D2H FIS, mapped to the ATA CMD, to have a certain value … … 569 515 * the data buffer. 570 516 */ 571 if (((pt.cmd.ata.lba_l >> 8) & 0xffff) == 0xf42c) { 572 *((ULONG _far *) buf) = 1; 517 if (((pt.cmd.ata.lba_l >> 8) & 0xffff) == 0xf42c) 518 { 519 *((ULONG *) buf) = 1; 573 520 } else { 574 *((ULONG _far*) buf) = 0;521 *((ULONG *) buf) = 0; 575 522 } 576 523 } … … 585 532 * ATA/ATAPI units sequentially. 
586 533 */ 587 static int map_unit(BYTE unit, USHORT _far *a, USHORT _far *p, USHORT _far*d)534 static int map_unit(BYTE unit, USHORT *a, USHORT *p, USHORT *d) 588 535 { 589 536 USHORT _a; … … 592 539 593 540 /* map unit to adapter/port/device */ 594 for (_a = 0; _a < ad_info_cnt; _a++) { 541 for (_a = 0; _a < ad_info_cnt; _a++) 542 { 595 543 AD_INFO *ai = ad_infos + _a; 596 544 597 for (_p = 0; _p <= ai->port_max; _p++) { 545 for (_p = 0; _p <= ai->port_max; _p++) 546 { 598 547 P_INFO *pi = ai->ports + _p; 599 548 600 for (_d = 0; _d <= pi->dev_max; _d++) { 601 if (pi->devs[_d].present) { 602 if (unit-- == 0) { 549 for (_d = 0; _d <= pi->dev_max; _d++) 550 { 551 if (pi->devs[_d].present) 552 { 553 if (unit-- == 0) 554 { 603 555 /* found the device */ 604 556 *a = _a; … … 617 569 618 570 /****************************************************************************** 619 * Get linear address for specified virtual address.620 */621 static LIN lin(void _far *p)622 {623 LIN l;624 625 if (DevHelp_VirtToLin((SEL) ((ULONG) p >> 16), (USHORT) (ULONG) p, &l) != 0) {626 return(0);627 }628 629 return(l);630 }631 632 /******************************************************************************633 571 * IORB notification routine; used to wake up the sleeping application thread 634 572 * when the IOCTL IORB is complete. 635 573 */ 636 IORBH _far * _far _cdecl ioctl_wakeup(IORBH _far *iorb) 637 { 638 USHORT awake_count; 639 640 DevHelp_ProcRun((ULONG) iorb, &awake_count); 641 574 IORBH *IoctlWakeup(ULONG ulArg) 575 { 576 KernWakeup(ulArg, 0, NULL, 0); 642 577 return(NULL); 643 578 } -
trunk/src/os2ahci/ioctl.h
r129 r178 4 4 * Copyright (c) 2011 thi.guten Software Development 5 5 * Copyright (c) 2011 Mensys B.V. 6 * Copyright (c) 2013-2016 David Azarewicz 6 7 * 7 8 * Authors: Christian Mueller, Markus Thielen … … 137 138 138 139 ULONG buflen; /* length of buffer for data transfers */ 139 LIN buf; /* buffer for data transfers (32-bit linear address) */ 140 void *buf; /* buffer for data transfers (32-bit linear address) */ 140 141 USHORT sense_len; /* length of sense data in IOCTL DataPacket */ 141 142 } OS2AHCI_PASSTHROUGH; -
trunk/src/os2ahci/os2ahci.c
r176 r178 4 4 * Copyright (c) 2011 thi.guten Software Development 5 5 * Copyright (c) 2011 Mensys B.V. 6 * Copyright (c) 2013-201 5David Azarewicz6 * Copyright (c) 2013-2016 David Azarewicz 7 7 * 8 8 * Authors: Christian Mueller, Markus Thielen … … 29 29 #include "ioctl.h" 30 30 #include "version.h" 31 #include "devhdr.h" 31 32 32 33 /* -------------------------- macros and constants ------------------------- */ 33 34 /* parse integer command line parameter */35 #define drv_parm_int(s, value, type, radix) \36 { \37 char _far *_ep; \38 if ((s)[1] != ':') { \39 cprintf("%s: missing colon (:) after /%c\n", drv_name, *(s)); \40 goto init_fail; \41 } \42 value = (type) strtol((s) + 2, \43 (const char _far* _far*) &_ep, \44 radix); \45 s = _ep; \46 }47 48 #define drv_parm_int_optional(s, value, type, radix) \49 { \50 char _far *_ep; \51 if ((s)[1] == ':') { \52 value = (type) strtol((s) + 2, (const char _far* _far*) &_ep, radix); \53 s = _ep; \54 } else { \55 value++; \56 } \57 }58 34 59 35 /* set two-dimensional array of port options */ … … 70 46 } 71 47 72 /* constants for undefined kernel exit routine;73 * see register_krnl_exit() func */74 #define DevHlp_RegisterKrnlExit 0x006f75 76 48 #define FLAG_KRNL_EXIT_ADD 0x1000 77 49 #define FLAG_KRNL_EXIT_REMOVE 0x2000 … … 87 59 /* -------------------------- function prototypes -------------------------- */ 88 60 89 void _cdecl small_code_ (void); 90 91 static int add_unit_info (IORB_CONFIGURATION _far *iorb_conf, int dt_ai, 92 int a, int p, int d, int scsi_id); 93 94 static void register_krnl_exit (void); 61 extern int SetPsdPutc(void); 62 static int add_unit_info(IORB_CONFIGURATION *iorb_conf, int dt_ai, int a, int p, int d, int scsi_id); 95 63 96 64 /* ------------------------ global/static variables ------------------------ */ 97 98 int debug = 0; /* if > 0, print debug messages to COM1 */ 99 int thorough_scan = 1; /* if != 0, perform thorough PCI scan */ 100 int init_reset = 1; /* if != 0, reset ports during init */ 101 
int force_write_cache; /* if != 0, force write cache */ 102 int verbosity = 0; /* default is quiet. 1=show sign on banner, >1=show adapter info during boot */ 103 int use_lvm_info = 1; 104 int wrap_trace_buffer = 0; 105 long com_baud = 0; 106 107 PFN Device_Help = 0; /* pointer to device helper entry point */ 108 ULONG RMFlags = 0; /* required by resource manager library */ 109 PFN RM_Help0 = NULL; /* required by resource manager library */ 110 PFN RM_Help3 = NULL; /* required by resource manager library */ 111 HDRIVER rm_drvh; /* resource manager driver handle */ 112 char rm_drvname[80]; /* driver name as returned by RM */ 113 USHORT add_handle; /* driver handle (RegisterDeviceClass) */ 114 UCHAR timer_pool[TIMER_POOL_SIZE]; /* timer pool */ 115 char drv_name[] = "OS2AHCI"; /* driver name as string */ 65 int thorough_scan = 1; /* if != 0, perform thorough PCI scan */ 66 int init_reset = 1; /* if != 0, reset ports during init */ 67 int force_write_cache; /* if != 0, force write cache */ 68 int verbosity = 0; /* default is quiet. 
1=show sign on banner, >1=show adapter info during boot */ 69 int use_lvm_info = 1; 70 long com_baud = 0; 71 72 HDRIVER rm_drvh; /* resource manager driver handle */ 73 USHORT add_handle; /* driver handle (RegisterDeviceClass) */ 74 char drv_name[] = "OS2AHCI"; /* driver name as string */ 116 75 117 76 /* resource manager driver information structure */ 118 DRIVERSTRUCT rm_drvinfo = { 119 drv_name, /* driver name */ 120 "AHCI SATA Driver", /* driver description */ 121 DVENDOR, /* vendor name */ 122 DMAJOR, /* RM interface version major */ 123 DMINOR, /* RM interface version minor */ 124 BLD_YEAR, BLD_MONTH, BLD_DAY, /* date */ 125 0, /* driver flags */ 126 DRT_ADDDM, /* driver type */ 127 DRS_ADD, /* driver sub type */ 128 NULL /* driver callback */ 77 static DRIVERSTRUCT rm_drvinfo = 78 { 79 NULL, /* We cannot do Flat to Far16 conversion at compile time */ 80 NULL, /* so we put NULLs in all the Far16 fields and then fill */ 81 NULL, /* them in at run time */ 82 DMAJOR, 83 DMINOR, 84 BLD_YEAR, BLD_MONTH, BLD_DAY, 85 0, 86 DRT_ADDDM, 87 DRS_ADD, 88 NULL 129 89 }; 130 90 131 ULONGdrv_lock; /* driver-level spinlock */132 IORB_QUEUE 133 AD_INFO 134 int 135 u16 136 int 137 int 138 int 91 SpinLock_t drv_lock; /* driver-level spinlock */ 92 IORB_QUEUE driver_queue; /* driver-level IORB queue */ 93 AD_INFO ad_infos[MAX_AD]; /* adapter information list */ 94 int ad_info_cnt; /* number of entries in ad_infos[] */ 95 u16 ad_ignore; /* bitmap with adapter indexes to ignore */ 96 int init_complete; /* if != 0, initialization has completed */ 97 int suspended; 98 int resume_sleep_flag; 139 99 140 100 /* apapter/port-specific options saved when parsing the command line */ 141 u8 emulate_scsi[MAX_AD][AHCI_MAX_PORTS]; 142 u8 enable_ncq[MAX_AD][AHCI_MAX_PORTS]; 143 u8 link_speed[MAX_AD][AHCI_MAX_PORTS]; 144 u8 link_power[MAX_AD][AHCI_MAX_PORTS]; 145 u8 track_size[MAX_AD][AHCI_MAX_PORTS]; 146 u8 port_ignore[MAX_AD][AHCI_MAX_PORTS]; 147 148 static char init_msg[] = "%s driver version 
%d.%02d\n"; 149 static char exit_msg[] = "%s driver *not* installed\n"; 101 u8 emulate_scsi[MAX_AD][AHCI_MAX_PORTS]; 102 u8 enable_ncq[MAX_AD][AHCI_MAX_PORTS]; 103 u8 link_speed[MAX_AD][AHCI_MAX_PORTS]; 104 u8 link_power[MAX_AD][AHCI_MAX_PORTS]; 105 u8 track_size[MAX_AD][AHCI_MAX_PORTS]; 106 u8 port_ignore[MAX_AD][AHCI_MAX_PORTS]; 107 150 108 char BldLevel[] = BLDLEVEL; 151 109 … … 158 116 * packet for IDC calls, so they can be handled by gen_ioctl. 159 117 */ 160 USHORT _cdecl c_strat(RPH _far *req)118 void StrategyHandler(REQPACKET *prp) 161 119 { 162 120 u16 rc; 163 121 164 switch (req->Cmd) { 165 166 case CMDInitBase: 167 rc = init_drv((RPINITIN _far *) req); 168 break; 169 170 case CMDShutdown: 171 rc = exit_drv(((RPSAVERESTORE _far *) req)->FuncCode); 172 break; 173 174 case CMDGenIOCTL: 175 rc = gen_ioctl((RP_GENIOCTL _far *) req); 176 break; 177 178 case CMDOpen: 179 build_user_info(1); 180 rc = STDON; 181 break; 182 183 case CMDINPUT: 184 rc = char_dev_input((RP_RWV _far *) req); 185 break; 186 187 case CMDSaveRestore: 188 rc = sr_drv(((RPSAVERESTORE _far *) req)->FuncCode); 189 break; 190 191 case CMDClose: 192 case CMDInputS: 193 case CMDInputF: 122 switch (prp->bCommand) 123 { 124 case STRATEGY_BASEDEVINIT: 125 rc = init_drv(prp); 126 break; 127 128 case STRATEGY_SHUTDOWN: 129 rc = exit_drv(prp->save_restore.Function); 130 break; 131 132 case STRATEGY_GENIOCTL: 133 rc = gen_ioctl(prp); 134 break; 135 136 case STRATEGY_OPEN: 137 build_user_info(); 138 rc = RPDONE; 139 break; 140 141 case STRATEGY_READ: 142 rc = char_dev_input(prp); 143 break; 144 145 case STRATEGY_SAVERESTORE: 146 rc = sr_drv(prp->save_restore.Function); 147 break; 148 149 case STRATEGY_INITCOMPLETE: 150 case STRATEGY_CLOSE: 151 case STRATEGY_INPUTSTATUS: 152 case STRATEGY_FLUSHINPUT: 194 153 /* noop */ 195 rc = STDON;154 rc = RPDONE; 196 155 break; 197 156 198 157 default: 199 rc = STDON | STATUS_ERR_UNKCMD; 200 break; 201 } 202 203 return(rc); 158 rc = RPDONE | RPERR_BADCOMMAND; 159 
break; 160 } 161 162 prp->usStatus = rc; 163 } 164 165 void IdcHandler(REQPACKET *prp) 166 { 167 StrategyHandler(prp); 204 168 } 205 169 … … 208 172 * the PCI bus for supported AHCI adapters, etc. 209 173 */ 210 USHORT init_drv(R PINITIN _far*req)174 USHORT init_drv(REQPACKET *req) 211 175 { 212 176 static int init_drv_called; 213 177 static int init_drv_failed; 214 RPINITOUT _far *rsp = (RPINITOUT _far *) req;215 DDD_PARM_LIST _far *ddd_pl = (DDD_PARM_LIST _far *) req->InitArgs;216 178 APIRET rmrc; 217 char _far *cmd_line; 218 char _far *s; 179 const char *pszCmdLine, *cmd_line; 219 180 int adapter_index = -1; 220 181 int port_index = -1; 221 int invert_option; 222 int optval; 223 u16 vendor; 224 u16 device; 225 226 if (init_drv_called) { 227 /* This is the init call for the second (legacy IBMS506$) character 182 int iInvertOption; 183 int iStatus; 184 185 if (init_drv_called) 186 { 187 /* This is the init call for the second (IBMS506$) character 228 188 * device driver. If the main driver failed initialization, fail this 229 189 * one as well. 230 190 */ 231 rsp->CodeEnd = (u16) end_of_code; 232 rsp->DataEnd = (u16) &end_of_data; 233 return(STDON | ((init_drv_failed) ? ERROR_I24_QUIET_INIT_FAIL : 0)); 234 } 191 return(RPDONE | ((init_drv_failed) ? RPERR_INITFAIL : 0)); 192 } 193 D32g_DbgLevel = 0; 235 194 init_drv_called = 1; 236 195 suspended = 0; … … 238 197 memset(ad_infos, 0, sizeof(ad_infos)); 239 198 memset(emulate_scsi, 1, sizeof(emulate_scsi)); /* set default enabled */ 240 241 /* set device helper entry point */ 242 Device_Help = req->DevHlpEP; 199 UtSetDriverName("OS2AHCI$"); 200 Header.ulCaps |= DEV_ADAPTER_DD; /* DAZ This flag is not really needed. 
*/ 243 201 244 202 /* create driver-level spinlock */ 245 DevHelp_CreateSpinLock(&drv_lock); 246 247 /* initialize libc code */ 248 init_libc(); 203 KernAllocSpinLock(&drv_lock); 249 204 250 205 /* register driver with resource manager */ 251 if ((rmrc = RMCreateDriver(&rm_drvinfo, &rm_drvh)) != RMRC_SUCCESS) { 252 cprintf("%s: failed to register driver with resource manager (rc = %d)\n", 253 drv_name, rmrc); 206 rm_drvinfo.DrvrName = drv_name; 207 rm_drvinfo.DrvrDescript = "AHCI SATA Driver"; 208 rm_drvinfo.VendorName = DVENDOR; 209 if ((rmrc = RMCreateDriver(&rm_drvinfo, &rm_drvh)) != RMRC_SUCCESS) 210 { 211 iprintf("%s: failed to register driver with resource manager (rc = %d)", drv_name, rmrc); 254 212 goto init_fail; 255 213 } 256 214 257 /* parse command line parameters */ 258 cmd_line = (char _far *) ((u32) ddd_pl & 0xffff0000l) + ddd_pl->cmd_line_args; 259 260 for (s = cmd_line; *s != 0; s++) { 261 if (*s == '/') { 262 if ((invert_option = (s[1] == '!')) != 0) { 263 s++; 264 } 265 s++; 266 switch (tolower(*s)) { 267 268 case '\0': 269 /* end of command line; can only happen if command line is incorrect */ 270 cprintf("%s: incomplete command line option\n", drv_name); 271 goto init_fail; 272 273 case 'b': 274 drv_parm_int(s, com_baud, u32, 10); 275 break; 276 277 case 'c': 278 /* set COM port base address for debug messages */ 279 drv_parm_int(s, com_base, u16, 16); 280 if (com_base == 1) com_base = 0x3f8; 281 if (com_base == 2) com_base = 0x2f8; 282 break; 283 284 case 'd': 285 /* increase debug level */ 286 drv_parm_int_optional(s, debug, int, 10); 287 break; 288 289 case 'g': 290 /* add specfied PCI ID as a supported generic AHCI adapter */ 291 drv_parm_int(s, vendor, u16, 16); 292 s--; 293 drv_parm_int(s, device, u16, 16); 294 if (add_pci_id(vendor, device)) { 295 cprintf("%s: failed to add PCI ID %04x:%04x\n", drv_name, vendor, device); 296 goto init_fail; 297 } 298 thorough_scan = 1; 299 break; 300 301 case 't': 302 /* perform thorough PCI scan (i.e. 
look for individual supported PCI IDs) */ 303 thorough_scan = !invert_option; 304 break; 305 306 case 'r': 307 /* reset ports during initialization */ 308 init_reset = !invert_option; 309 break; 310 311 case 'f': 312 /* force write cache regardless of IORB flags */ 313 force_write_cache = 1; 314 break; 315 316 case 'a': 317 /* set adapter index for adapter and port-related options */ 318 drv_parm_int(s, adapter_index, int, 10); 319 if (adapter_index < 0 || adapter_index >= MAX_AD) { 320 cprintf("%s: invalid adapter index (%d)\n", drv_name, adapter_index); 321 goto init_fail; 322 } 323 break; 324 325 case 'p': 326 /* set port index for port-related options */ 327 drv_parm_int(s, port_index, int, 10); 328 if (port_index < 0 || port_index >= AHCI_MAX_PORTS) { 329 cprintf("%s: invalid port index (%d)\n", drv_name, port_index); 330 goto init_fail; 331 } 332 break; 333 334 case 'i': 335 /* ignore current adapter index */ 336 if (adapter_index >= 0) { 337 if (port_index >= 0) port_ignore[adapter_index][port_index] = !invert_option; 338 else ad_ignore |= 1U << adapter_index; 339 } 340 break; 341 342 case 's': 343 /* enable SCSI emulation for ATAPI devices */ 344 set_port_option(emulate_scsi, !invert_option); 345 break; 346 347 case 'n': 348 /* enable NCQ */ 349 set_port_option(enable_ncq, !invert_option); 350 break; 351 352 case 'l': 353 /* set link speed or power savings */ 354 s++; 355 switch (tolower(*s)) { 356 case 's': 357 /* set link speed */ 358 drv_parm_int(s, optval, int, 10); 359 set_port_option(link_speed, optval); 360 break; 361 case 'p': 362 /* set power management */ 363 drv_parm_int(s, optval, int, 10); 364 set_port_option(link_power, optval); 365 break; 366 default: 367 cprintf("%s: invalid link parameter (%c)\n", drv_name, *s); 368 goto init_fail; 369 } 370 /* need to reset the port in order to establish link settings */ 371 init_reset = 1; 372 break; 373 374 case '4': 375 /* enable 4K sector geometry enhancement (track size = 56) */ 376 if 
(!invert_option) { 377 set_port_option(track_size, 56); 378 } 379 break; 380 381 case 'z': 382 /* Specify to not use the LVM information. There is no reason why anyone would 383 * want to do this, but previous versions of this driver did not have LVM capability, 384 * so this switch is here temporarily just in case. 385 */ 386 use_lvm_info = !invert_option; 387 break; 388 389 case 'v': 390 /* be verbose during boot */ 391 drv_parm_int_optional(s, verbosity, int, 10); 392 break; 393 394 case 'w': 395 /* Specify to allow the trace buffer to wrap when full. */ 396 wrap_trace_buffer = !invert_option; 397 break; 398 399 case 'q': 400 /* Temporarily output a non-fatal message to get anyone using this 401 * undocumented switch to stop using it. This will be removed soon 402 * and the error will become fatal. 403 */ 404 cprintf("%s: unknown option: /%c\n", drv_name, *s); 405 break; 406 407 default: 408 cprintf("%s: unknown option: /%c\n", drv_name, *s); 409 goto init_fail; 410 } 411 } 412 } 413 414 if (com_baud) init_com(com_baud); /* initialize com port for debug output */ 415 416 /* initialize trace buffer if applicable */ 417 if (debug > 0 && com_base == 0) { 418 /* debug is on, but COM port is off -> use our trace buffer */ 419 trace_init(AHCI_DEBUG_BUF_SIZE); 420 } else { 421 trace_init(AHCI_INFO_BUF_SIZE); 422 } 423 424 ntprintf("BldLevel: %s\n", BldLevel); 425 ntprintf("CmdLine: %Fs\n", cmd_line); 215 pszCmdLine = cmd_line = req->init_in.szArgs; 216 iStatus = 0; 217 while (*pszCmdLine) 218 { 219 if (*pszCmdLine++ != '/') continue; /* Ignore anything that doesn't start with '/' */ 220 /* pszCmdLine now points to first char of argument */ 221 222 if ((iInvertOption = (*pszCmdLine == '!')) != 0) pszCmdLine++; 223 224 if (ArgCmp(pszCmdLine, "B:")) 225 { 226 pszCmdLine += 2; 227 com_baud = strtol(pszCmdLine, &pszCmdLine, 0); 228 continue; 229 } 230 231 if (ArgCmp(pszCmdLine, "C:")) 232 { 233 pszCmdLine += 2; 234 /* set COM port base address for debug messages */ 235 
D32g_ComBase = strtol(pszCmdLine, &pszCmdLine, 0); 236 if (D32g_ComBase == 1) D32g_ComBase = 0x3f8; 237 if (D32g_ComBase == 2) D32g_ComBase = 0x2f8; 238 continue; 239 } 240 241 if (ArgCmp(pszCmdLine, "D")) 242 { 243 pszCmdLine++; 244 if (*pszCmdLine == ':') 245 { 246 pszCmdLine++; 247 D32g_DbgLevel = strtol(pszCmdLine, &pszCmdLine, 0); 248 } 249 else D32g_DbgLevel++; /* increase debug level */ 250 continue; 251 } 252 253 if (ArgCmp(pszCmdLine, "G:")) 254 { 255 u16 usVendor; 256 u16 usDevice; 257 258 pszCmdLine += 2; 259 /* add specfied PCI ID as a supported generic AHCI adapter */ 260 usVendor = strtol(pszCmdLine, &pszCmdLine, 16); 261 if (*pszCmdLine != ':') break; 262 pszCmdLine++; 263 usDevice = strtol(pszCmdLine, &pszCmdLine, 16); 264 if (add_pci_id(usVendor, usDevice)) 265 { 266 iprintf("%s: failed to add PCI ID %04x:%04x", drv_name, usVendor, usDevice); 267 iStatus = 1; 268 } 269 thorough_scan = 1; 270 continue; 271 } 272 273 if (ArgCmp(pszCmdLine, "T")) 274 { 275 pszCmdLine++; 276 /* perform thorough PCI scan (i.e. 
look for individual supported PCI IDs) */ 277 thorough_scan = !iInvertOption; 278 continue; 279 } 280 281 if (ArgCmp(pszCmdLine, "R")) 282 { 283 pszCmdLine++; 284 /* reset ports during initialization */ 285 init_reset = !iInvertOption; 286 continue; 287 } 288 289 if (ArgCmp(pszCmdLine, "F")) 290 { 291 pszCmdLine++; 292 /* force write cache regardless of IORB flags */ 293 force_write_cache = 1; 294 continue; 295 } 296 297 if (ArgCmp(pszCmdLine, "A:")) 298 { 299 pszCmdLine += 2; 300 /* set adapter index for adapter and port-related options */ 301 adapter_index = strtol(pszCmdLine, &pszCmdLine, 0); 302 if (adapter_index < 0 || adapter_index >= MAX_AD) 303 { 304 iprintf("%s: invalid adapter index (%d)", drv_name, adapter_index); 305 iStatus = 1; 306 } 307 continue; 308 } 309 310 if (ArgCmp(pszCmdLine, "P:")) 311 { 312 pszCmdLine += 2; 313 /* set port index for port-related options */ 314 port_index = strtol(pszCmdLine, &pszCmdLine, 0); 315 if (port_index < 0 || port_index >= AHCI_MAX_PORTS) 316 { 317 iprintf("%s: invalid port index (%d)", drv_name, port_index); 318 iStatus = 1; 319 } 320 continue; 321 } 322 323 if (ArgCmp(pszCmdLine, "I")) 324 { 325 pszCmdLine++; 326 /* ignore current adapter index */ 327 if (adapter_index >= 0) 328 { 329 if (port_index >= 0) port_ignore[adapter_index][port_index] = !iInvertOption; 330 else ad_ignore |= 1U << adapter_index; 331 } 332 continue; 333 } 334 335 if (ArgCmp(pszCmdLine, "S")) 336 { 337 pszCmdLine++; 338 /* enable SCSI emulation for ATAPI devices */ 339 set_port_option(emulate_scsi, !iInvertOption); 340 continue; 341 } 342 343 if (ArgCmp(pszCmdLine, "N")) 344 { 345 pszCmdLine++; 346 /* enable NCQ */ 347 set_port_option(enable_ncq, !iInvertOption); 348 continue; 349 } 350 351 if (ArgCmp(pszCmdLine, "LS:")) 352 { 353 int optval; 354 355 pszCmdLine += 3; 356 /* set link speed */ 357 optval = strtol(pszCmdLine, &pszCmdLine, 0); 358 set_port_option(link_speed, optval); 359 /* need to reset the port in order to establish link 
settings */ 360 init_reset = 1; 361 continue; 362 } 363 364 if (ArgCmp(pszCmdLine, "LP:")) 365 { 366 int optval; 367 368 pszCmdLine += 3; 369 /* set power management */ 370 optval = strtol(pszCmdLine, &pszCmdLine, 0); 371 set_port_option(link_power, optval); 372 /* need to reset the port in order to establish link settings */ 373 init_reset = 1; 374 continue; 375 } 376 377 if (ArgCmp(pszCmdLine, "4")) 378 { 379 pszCmdLine++; 380 /* enable 4K sector geometry enhancement (track size = 56) */ 381 if (!iInvertOption) set_port_option(track_size, 56); 382 continue; 383 } 384 385 if (ArgCmp(pszCmdLine, "Z")) 386 { 387 pszCmdLine++; 388 /* Specify to not use the LVM information. There is no reason why anyone would 389 * want to do this, but previous versions of this driver did not have LVM capability, 390 * so this switch is here temporarily just in case. 391 */ 392 use_lvm_info = !iInvertOption; 393 continue; 394 } 395 396 if (ArgCmp(pszCmdLine, "V")) 397 { 398 pszCmdLine++; 399 if (*pszCmdLine == ':') 400 { 401 pszCmdLine++; 402 verbosity = strtol(pszCmdLine, &pszCmdLine, 0); 403 } 404 else verbosity++; /* increase verbosity level */ 405 continue; 406 } 407 408 if (ArgCmp(pszCmdLine, "W")) 409 { 410 pszCmdLine++; 411 /* Specify to allow the trace buffer to wrap when full. */ 412 D32g_DbgBufWrap = !iInvertOption; 413 continue; 414 } 415 416 iprintf("Unrecognized switch: %s", pszCmdLine-1); 417 iStatus = 1; /* unrecognized argument */ 418 } 419 420 if (iStatus) goto init_fail; 421 422 if (com_baud) InitComPort(com_baud); 423 424 NTPRINTF("BldLevel: %s\n", BldLevel); 425 NTPRINTF("CmdLine: %s\n", cmd_line); 426 /* 427 if (sizeof(ADD_WORKSPACE) > ADD_WORKSPACE_SIZE) 428 { 429 dprintf(0,"ADD_WORKSPACE size is too big! 
%d>16\n", sizeof(ADD_WORKSPACE)); 430 goto init_fail; 431 } 432 */ 426 433 427 434 /* print initialization message */ 428 ciprintf( init_msg, drv_name, DMAJOR, DMINOR);435 ciprintf("%s driver version %d.%02d", drv_name, DMAJOR, DMINOR); 429 436 430 437 #ifdef TESTVER … … 435 442 scan_pci_bus(); 436 443 437 if (ad_info_cnt > 0) { 444 if (ad_info_cnt > 0) 445 { 438 446 /* initialization succeeded and we found at least one AHCI adapter */ 439 ADD_InitTimer(timer_pool, sizeof(timer_pool)); 440 441 if (DevHelp_RegisterDeviceClass(drv_name, (PFN) add_entry, 0, 1, &add_handle)){442 cprintf("%s: couldn't register device class\n", drv_name);447 448 if (Dev32Help_RegisterDeviceClass(drv_name, add_entry, 0, 1, &add_handle)) 449 { 450 iprintf("%s: couldn't register device class", drv_name); 443 451 goto init_fail; 444 452 } 445 453 454 Timer_InitTimer(TIMER_COUNT); 455 446 456 /* allocate context hooks */ 447 if (DevHelp_AllocateCtxHook(mk_NPFN(restart_hook), &restart_ctxhook_h) != 0 || 448 DevHelp_AllocateCtxHook(mk_NPFN(reset_hook), &reset_ctxhook_h) != 0 || 449 DevHelp_AllocateCtxHook(mk_NPFN(engine_hook), &engine_ctxhook_h)) { 450 cprintf("%s: failed to allocate task-time context hooks\n", drv_name); 451 goto init_fail; 452 } 453 454 rsp->CodeEnd = (u16) end_of_code; 455 rsp->DataEnd = (u16) &end_of_data; 457 KernAllocateContextHook(restart_ctxhook, 0, &restart_ctxhook_h); 458 KernAllocateContextHook(reset_ctxhook, 0, &reset_ctxhook_h); 459 KernAllocateContextHook(engine_ctxhook, 0, &engine_ctxhook_h); 456 460 457 461 /* register kernel exit routine for trap dumps */ 458 register_krnl_exit(); 459 460 return(STDON); 461 462 } else { 462 Dev32Help_RegisterKrnlExit(shutdown_driver, FLAG_KRNL_EXIT_ADD, TYPE_KRNL_EXIT_INT13); 463 464 return(RPDONE); 465 466 } 467 else 468 { 463 469 /* no adapters found */ 464 ciprintf(" No adapters found.\n");470 ciprintf("%s: No adapters found.", drv_name); 465 471 } 466 472 467 473 init_fail: 468 474 /* initialization failed; set segment 
sizes to 0 and return error */ 469 rsp->CodeEnd = 0;470 rsp->DataEnd = 0;471 475 init_drv_failed = 1; 472 476 473 /* free context hooks */ 474 if (engine_ctxhook_h != 0) DevHelp_FreeCtxHook(engine_ctxhook_h); 475 if (reset_ctxhook_h != 0) DevHelp_FreeCtxHook(reset_ctxhook_h); 476 if (restart_ctxhook_h != 0) DevHelp_FreeCtxHook(restart_ctxhook_h); 477 478 if (rm_drvh != 0) { 477 if (rm_drvh != 0) 478 { 479 479 /* remove driver from resource manager */ 480 480 RMDestroyDriver(rm_drvh); 481 481 } 482 482 483 ciprintf( exit_msg, drv_name);484 return( STDON | ERROR_I24_QUIET_INIT_FAIL);483 ciprintf("%s driver *not* installed", drv_name); 484 return(RPDONE | RPERR_INITFAIL); 485 485 } 486 486 … … 491 491 * commands for ATA disks) are implemented here. 492 492 */ 493 USHORT gen_ioctl(R P_GENIOCTL _far*ioctl)494 { 495 dprintf("IOCTL 0x%x/0x%x\n", (u16) ioctl->Category, (u16) ioctl->Function);496 497 switch (ioctl-> Category) {498 493 USHORT gen_ioctl(REQPACKET *ioctl) 494 { 495 DPRINTF(2,"IOCTL 0x%x/0x%x\n", ioctl->ioctl.bCategory, ioctl->ioctl.bFunction); 496 497 switch (ioctl->ioctl.bCategory) 498 { 499 499 case OS2AHCI_IOCTL_CATEGORY: 500 switch (ioctl->Function) { 500 switch (ioctl->ioctl.bFunction) 501 { 501 502 502 503 case OS2AHCI_IOCTL_GET_DEVLIST: … … 514 515 case DSKSP_CAT_SMART: 515 516 return(ioctl_smart(ioctl)); 516 517 } 518 519 return(STDON | STATUS_ERR_UNKCMD); 517 } 518 519 return(RPDONE | RPERR_BADCOMMAND); 520 520 } 521 521 … … 525 525 * dump similar to IBM1S506.ADD/DANIS506.ADD (TODO). 
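The reworked `char_dev_input()` above maps the caller's physical buffer with `Dev32Help_PhysToLin()` and then copies buffered debug text, reporting the byte count back through `usCount`. A minimal sketch of that clamp-and-copy step, with a hypothetical stand-in buffer and helper name (the real driver copies from its debug ring via `dCopyToUser`):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for the driver's debug buffer; the real
 * char_dev_input() first maps the request packet's physical address
 * to a linear one, then copies from the driver's own buffer. */
static const char dbg_buf[] = "os2ahci debug output";

/* Copy at most 'want' bytes of buffered text into 'dst' and return
 * the count actually copied, as usCount is updated in the packet. */
static size_t copy_debug(char *dst, size_t want)
{
    size_t avail = sizeof(dbg_buf) - 1;   /* bytes available, no NUL */
    size_t n = want < avail ? want : avail;
    memcpy(dst, dbg_buf, n);
    return n;
}
```

Note that on the mapping-failure path the driver zeroes `usCount` before returning the error, so the caller never sees a stale length.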
526 526 */ 527 USHORT char_dev_input(RP_RWV _far *rwrb) 528 { 529 return(trace_char_dev(rwrb)); 527 USHORT char_dev_input(REQPACKET *pPacket) 528 { 529 void *LinAdr; 530 531 if (Dev32Help_PhysToLin(pPacket->io.ulAddress, pPacket->io.usCount, &LinAdr)) 532 { 533 pPacket->io.usCount = 0; 534 return RPDONE | RPERR_GENERAL; 535 } 536 537 pPacket->io.usCount = dCopyToUser(LinAdr, pPacket->io.usCount); 538 539 return RPDONE; 530 540 } 531 541 … … 542 552 USHORT exit_drv(int func) 543 553 { 544 dprintf("exit_drv(%d) called\n", func); 545 546 if (func == 0) { 554 DPRINTF(2,"exit_drv(%d) called\n", func); 555 556 if (func == 0) 557 { 547 558 /* we're only interested in the second phase of the shutdown */ 548 return( STDON);559 return(RPDONE); 549 560 } 550 561 551 562 suspend(); 552 return( STDON);563 return(RPDONE); 553 564 } 554 565 … … 559 570 USHORT sr_drv(int func) 560 571 { 561 dprintf("sr_drv(%d) called\n", func);572 DPRINTF(2,"sr_drv(%d) called\n", func); 562 573 563 574 if (func) resume(); 564 575 else suspend(); 565 576 566 return( STDON);577 return(RPDONE); 567 578 } 568 579 … … 578 589 * details. 579 590 */ 580 void _cdecl _far _loadds add_entry(IORBH _far *first_iorb)581 { 582 IORBH _far *iorb;583 IORBH _far *next = NULL;591 void add_entry(IORBH FAR16DATA *vFirstIorb) 592 { 593 IORBH FAR16DATA *vIorb; 594 IORBH FAR16DATA *vNext = NULL; 584 595 585 596 spin_lock(drv_lock); 586 597 587 for (iorb = first_iorb; iorb != NULL; iorb = next) { 598 for (vIorb=vFirstIorb; vIorb!=NULL; vIorb=vNext) 599 { 600 IORBH *pIorb = Far16ToFlat(vIorb); 601 588 602 /* Queue this IORB. Queues primarily exist on port level but there are 589 603 * some requests which affect the whole driver, most notably 590 604 * IOCC_CONFIGURATION. In either case, adding the IORB to the driver or 591 605 * port queue will change the links, thus we need to save the original 592 * link in ' next'.606 * link in 'vNext'. 593 607 */ 594 next = (iorb->RequestControl | IORB_CHAIN) ? 
iorb->pNxtIORB : 0; 595 596 iorb->Status = 0; 597 iorb->ErrorCode = 0; 598 memset(&iorb->ADDWorkSpace, 0x00, sizeof(ADD_WORKSPACE)); 599 600 if (iorb_driver_level(iorb)) { 608 vNext = (pIorb->RequestControl | IORB_CHAIN) ? pIorb->pNxtIORB : NULL; 609 610 pIorb->Status = 0; 611 pIorb->ErrorCode = 0; 612 memset(&pIorb->ADDWorkSpace, 0x00, sizeof(ADD_WORKSPACE)); 613 614 if (iorb_driver_level(pIorb)) 615 { 601 616 /* driver-level IORB */ 602 iorb->UnitHandle = 0; 603 iorb_queue_add(&driver_queue, iorb); 604 605 } else { 617 pIorb->UnitHandle = 0; 618 iorb_queue_add(&driver_queue, vIorb, pIorb); 619 620 } 621 else 622 { 606 623 /* port-level IORB */ 607 int a = iorb_unit_adapter( iorb);608 int p = iorb_unit_port( iorb);609 int d = iorb_unit_device( iorb);624 int a = iorb_unit_adapter(pIorb); 625 int p = iorb_unit_port(pIorb); 626 int d = iorb_unit_device(pIorb); 610 627 611 628 if (a >= ad_info_cnt || 612 629 p > ad_infos[a].port_max || 613 630 d > ad_infos[a].ports[p].dev_max || 614 (ad_infos[a].port_map & (1UL << p)) == 0) { 631 (ad_infos[a].port_map & (1UL << p)) == 0) 632 { 615 633 616 634 /* unit handle outside of the allowed range */ 617 dprintf("warning: IORB for %d.%d.%d out of range\n", a, p, d);618 iorb->Status = IORB_ERROR;619 iorb->ErrorCode = IOERR_CMD_SYNTAX;620 iorb_complete( iorb);635 DPRINTF(0,"warning: IORB for %d.%d.%d out of range\n", a, p, d); 636 pIorb->Status = IORB_ERROR; 637 pIorb->ErrorCode = IOERR_CMD_SYNTAX; 638 iorb_complete(vIorb, pIorb); 621 639 continue; 622 640 } 623 641 624 iorb_queue_add(&ad_infos[a].ports[p].iorb_queue, iorb);642 iorb_queue_add(&ad_infos[a].ports[p].iorb_queue, vIorb, pIorb); 625 643 } 626 644 } … … 649 667 int i; 650 668 651 for (i = 0; i < 3 || !init_complete; i++) { 652 if (trigger_engine_1() == 0) { 669 for (i = 0; i < 3 || !init_complete; i++) 670 { 671 if (trigger_engine_1() == 0) 672 { 653 673 /* done -- all IORBs have been sent on their way */ 654 674 return; … … 659 679 * keep trying in the background. 
660 680 */ 661 DevHelp_ArmCtxHook(0, engine_ctxhook_h);681 KernArmHook(engine_ctxhook_h, 0, 0); 662 682 } 663 683 … … 710 730 int trigger_engine_1(void) 711 731 { 712 IORBH _far *iorb; 713 IORBH _far *next; 732 IORBH FAR16DATA *vIorb; 733 IORBH *pIorb; 734 IORBH FAR16DATA *vNext; 714 735 int iorbs_sent = 0; 715 736 int a; … … 719 740 720 741 /* process driver-level IORBs */ 721 if ((iorb = driver_queue.root) != NULL && !add_workspace(iorb)->processing) { 722 send_iorb(iorb); 723 iorbs_sent++; 742 if ((vIorb = driver_queue.vRoot) != NULL) 743 { 744 pIorb = Far16ToFlat(vIorb); 745 746 if (!add_workspace(pIorb)->processing) 747 { 748 send_iorb(vIorb, pIorb); 749 iorbs_sent++; 750 } 724 751 } 725 752 726 753 /* process port-level IORBs */ 727 for (a = 0; a < ad_info_cnt; a++) { 754 for (a = 0; a < ad_info_cnt; a++) 755 { 728 756 AD_INFO *ai = ad_infos + a; 729 if (ai->busy) { 757 if (ai->busy) 758 { 730 759 /* adapter is busy; don't process any IORBs */ 731 760 continue; 732 761 } 733 for (p = 0; p <= ai->port_max; p++) { 762 for (p = 0; p <= ai->port_max; p++) 763 { 734 764 /* send all queued IORBs on this port */ 735 next = NULL; 736 for (iorb = ai->ports[p].iorb_queue.root; iorb != NULL; iorb = next) { 737 next = iorb->pNxtIORB; 738 if (!add_workspace(iorb)->processing) { 739 send_iorb(iorb); 765 vNext = NULL; 766 for (vIorb = ai->ports[p].iorb_queue.vRoot; vIorb != NULL; vIorb = vNext) 767 { 768 pIorb = Far16ToFlat(vIorb); 769 770 vNext = pIorb->pNxtIORB; 771 if (!add_workspace(pIorb)->processing) 772 { 773 send_iorb(vIorb, pIorb); 740 774 iorbs_sent++; 741 775 } … … 755 789 * functions and re-aquire it when done. 756 790 */ 757 void send_iorb(IORBH _far *iorb)791 void send_iorb(IORBH FAR16DATA *vIorb, IORBH *pIorb) 758 792 { 759 793 /* Mark IORB as "processing" before doing anything else. Once the IORB is … … 762 796 * IORB. 
763 797 */ 764 add_workspace( iorb)->processing = 1;798 add_workspace(pIorb)->processing = 1; 765 799 spin_unlock(drv_lock); 766 800 767 switch ( iorb->CommandCode) {768 801 switch (pIorb->CommandCode) 802 { 769 803 case IOCC_CONFIGURATION: 770 iocc_configuration( iorb);804 iocc_configuration(vIorb, pIorb); 771 805 break; 772 806 773 807 case IOCC_DEVICE_CONTROL: 774 iocc_device_control( iorb);808 iocc_device_control(vIorb, pIorb); 775 809 break; 776 810 777 811 case IOCC_UNIT_CONTROL: 778 iocc_unit_control( iorb);812 iocc_unit_control(vIorb, pIorb); 779 813 break; 780 814 781 815 case IOCC_GEOMETRY: 782 iocc_geometry( iorb);816 iocc_geometry(vIorb, pIorb); 783 817 break; 784 818 785 819 case IOCC_EXECUTE_IO: 786 iocc_execute_io( iorb);820 iocc_execute_io(vIorb, pIorb); 787 821 break; 788 822 789 823 case IOCC_UNIT_STATUS: 790 iocc_unit_status( iorb);824 iocc_unit_status(vIorb, pIorb); 791 825 break; 792 826 793 827 case IOCC_ADAPTER_PASSTHRU: 794 iocc_adapter_passthru( iorb);828 iocc_adapter_passthru(vIorb, pIorb); 795 829 break; 796 830 797 831 default: 798 832 /* unsupported call */ 799 iorb_seterr( iorb, IOERR_CMD_NOT_SUPPORTED);800 iorb_done( iorb);833 iorb_seterr(pIorb, IOERR_CMD_NOT_SUPPORTED); 834 iorb_done(vIorb, pIorb); 801 835 break; 802 836 } … … 809 843 * Handle IOCC_CONFIGURATION requests. 810 844 */ 811 void iocc_configuration(IORBH _far *iorb)845 void iocc_configuration(IORBH FAR16DATA *vIorb, IORBH *pIorb) 812 846 { 813 847 int a; 814 848 815 switch (iorb->CommandModifier) { 849 switch (pIorb->CommandModifier) 850 { 816 851 817 852 case IOCM_COMPLETE_INIT: … … 820 855 * use interrupts, timers and context hooks instead of polling). 
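`send_iorb()` above is a pure dispatcher: it marks the IORB as processing, drops the spinlock, and routes on `CommandCode`, with unsupported codes failed via `IOERR_CMD_NOT_SUPPORTED`. The same shape can be sketched as a table-driven dispatch; the codes and handlers below are hypothetical, the driver itself uses a plain `switch`:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical command codes standing in for the IOCC_* values. */
enum { CMD_CONFIG, CMD_EXECUTE_IO, CMD_GEOMETRY, CMD_MAX };

static int last_handled = -1;
static void h_config(void) { last_handled = CMD_CONFIG; }
static void h_execio(void) { last_handled = CMD_EXECUTE_IO; }
static void h_geom(void)   { last_handled = CMD_GEOMETRY; }

static void (*const dispatch[CMD_MAX])(void) = { h_config, h_execio, h_geom };

/* Returns 0 on success, -1 for an unsupported command (the driver
 * sets IOERR_CMD_NOT_SUPPORTED and completes the IORB instead). */
static int send_cmd(int code)
{
    if (code < 0 || code >= CMD_MAX)
        return -1;
    dispatch[code]();
    return 0;
}
```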
821 856 */ 822 if (!init_complete) { 823 dprintf("leaving initialization mode\n"); 824 for (a = 0; a < ad_info_cnt; a++) { 857 if (!init_complete) 858 { 859 DPRINTF(1,"leaving initialization mode\n"); 860 for (a = 0; a < ad_info_cnt; a++) 861 { 825 862 lock_adapter(ad_infos + a); 826 863 ahci_complete_init(ad_infos + a); … … 828 865 init_complete = 1; 829 866 830 /* DAZ turn off COM port output if on */831 //com_base = 0;832 833 867 /* release all adapters */ 834 for (a = 0; a < ad_info_cnt; a++) { 868 for (a = 0; a < ad_info_cnt; a++) 869 { 835 870 unlock_adapter(ad_infos + a); 836 871 } 872 DPRINTF(1,"leaving initialization mode 2\n"); 837 873 838 874 #ifdef LEGACY_APM … … 840 876 apm_init(); 841 877 #endif 842 843 build_user_info(0); 844 } 845 iorb_done(iorb); 878 } 879 iorb_done(vIorb, pIorb); 846 880 break; 847 881 848 882 case IOCM_GET_DEVICE_TABLE: 849 883 /* construct a device table */ 850 iocm_device_table( iorb);884 iocm_device_table(vIorb, pIorb); 851 885 break; 852 886 853 887 default: 854 iorb_seterr( iorb, IOERR_CMD_NOT_SUPPORTED);855 iorb_done( iorb);888 iorb_seterr(pIorb, IOERR_CMD_NOT_SUPPORTED); 889 iorb_done(vIorb, pIorb); 856 890 break; 857 891 } … … 861 895 * Handle IOCC_DEVICE_CONTROL requests. 
862 896 */ 863 void iocc_device_control(IORBH _far *iorb) 864 { 865 AD_INFO *ai = ad_infos + iorb_unit_adapter(iorb); 866 IORBH _far *ptr; 867 IORBH _far *next = NULL; 868 int p = iorb_unit_port(iorb); 869 int d = iorb_unit_device(iorb); 870 871 switch (iorb->CommandModifier) { 897 void iocc_device_control(IORBH FAR16DATA *vIorb, IORBH *pIorb) 898 { 899 AD_INFO *ai = ad_infos + iorb_unit_adapter(pIorb); 900 IORBH FAR16DATA *vPtr; 901 IORBH FAR16DATA *vNext = NULL; 902 int p = iorb_unit_port(pIorb); 903 int d = iorb_unit_device(pIorb); 904 905 switch (pIorb->CommandModifier) 906 { 872 907 873 908 case IOCM_ABORT: 874 909 /* abort all pending commands on specified port and device */ 875 910 spin_lock(drv_lock); 876 for (ptr = ai->ports[p].iorb_queue.root; ptr != NULL; ptr = next) { 877 next = ptr->pNxtIORB; 911 for (vPtr = ai->ports[p].iorb_queue.vRoot; vPtr != NULL; vPtr = vNext) 912 { 913 IORBH *pPtr = Far16ToFlat(vPtr); 914 915 vNext = pPtr->pNxtIORB; 878 916 /* move all matching IORBs to the abort queue */ 879 if (ptr != iorb && iorb_unit_device(ptr) == d) { 880 iorb_queue_del(&ai->ports[p].iorb_queue, ptr); 881 iorb_queue_add(&abort_queue, ptr); 882 ptr->ErrorCode = IOERR_CMD_ABORTED; 917 if (vPtr != vIorb && iorb_unit_device(pPtr) == d) 918 { 919 iorb_queue_del(&ai->ports[p].iorb_queue, vPtr); 920 iorb_queue_add(&abort_queue, vPtr, pPtr); 921 pPtr->ErrorCode = IOERR_CMD_ABORTED; 883 922 } 884 923 } … … 886 925 887 926 /* trigger reset context hook which will finish the abort processing */ 888 DevHelp_ArmCtxHook(0, reset_ctxhook_h);927 KernArmHook(reset_ctxhook_h, 0, 0); 889 928 break; 890 929 … … 896 935 * and ATAPI in the same driver, this won't be required. 897 936 */ 898 iorb_seterr( iorb, IOERR_CMD_NOT_SUPPORTED);937 iorb_seterr(pIorb, IOERR_CMD_NOT_SUPPORTED); 899 938 break; 900 939 … … 904 943 /* unit control commands to lock, unlock and eject media */ 905 944 /* will be supported later... 
*/ 906 iorb_seterr( iorb, IOERR_CMD_NOT_SUPPORTED);945 iorb_seterr(pIorb, IOERR_CMD_NOT_SUPPORTED); 907 946 break; 908 947 909 948 default: 910 iorb_seterr( iorb, IOERR_CMD_NOT_SUPPORTED);911 break; 912 } 913 914 iorb_done( iorb);949 iorb_seterr(pIorb, IOERR_CMD_NOT_SUPPORTED); 950 break; 951 } 952 953 iorb_done(vIorb, pIorb); 915 954 } 916 955 … … 918 957 * Handle IOCC_UNIT_CONTROL requests. 919 958 */ 920 void iocc_unit_control(IORBH _far *iorb)921 { 922 IORB_UNIT_CONTROL _far *iorb_uc = (IORB_UNIT_CONTROL _far *) iorb;923 int a = iorb_unit_adapter( iorb);924 int p = iorb_unit_port( iorb);925 int d = iorb_unit_device( iorb);959 void iocc_unit_control(IORBH FAR16DATA *vIorb, IORBH *pIorb) 960 { 961 IORB_UNIT_CONTROL *pIorb_uc = (IORB_UNIT_CONTROL *)pIorb; 962 int a = iorb_unit_adapter(pIorb); 963 int p = iorb_unit_port(pIorb); 964 int d = iorb_unit_device(pIorb); 926 965 927 966 spin_lock(drv_lock); 928 switch ( iorb->CommandModifier) {929 967 switch (pIorb->CommandModifier) 968 { 930 969 case IOCM_ALLOCATE_UNIT: 931 970 /* allocate unit for exclusive access */ 932 if (ad_infos[a].ports[p].devs[d].allocated) { 933 iorb_seterr(iorb, IOERR_UNIT_ALLOCATED); 934 } else { 971 if (ad_infos[a].ports[p].devs[d].allocated) 972 { 973 iorb_seterr(pIorb, IOERR_UNIT_ALLOCATED); 974 } 975 else 976 { 935 977 ad_infos[a].ports[p].devs[d].allocated = 1; 936 978 } … … 939 981 case IOCM_DEALLOCATE_UNIT: 940 982 /* deallocate exclusive access to unit */ 941 if (!ad_infos[a].ports[p].devs[d].allocated) { 942 iorb_seterr(iorb, IOERR_UNIT_NOT_ALLOCATED); 943 } else { 983 if (!ad_infos[a].ports[p].devs[d].allocated) 984 { 985 iorb_seterr(pIorb, IOERR_UNIT_NOT_ALLOCATED); 986 } 987 else 988 { 944 989 ad_infos[a].ports[p].devs[d].allocated = 0; 945 990 } … … 954 999 * IOCC_CONFIGURATION/IOCM_GET_DEVICE_TABLE calls. 
955 1000 */ 956 if (!ad_infos[a].ports[p].devs[d].allocated) { 957 iorb_seterr(iorb, IOERR_UNIT_NOT_ALLOCATED); 1001 if (!ad_infos[a].ports[p].devs[d].allocated) 1002 { 1003 iorb_seterr(pIorb, IOERR_UNIT_NOT_ALLOCATED); 958 1004 break; 959 1005 } 960 ad_infos[a].ports[p].devs[d].unit_info = iorb_uc->pUnitInfo;1006 ad_infos[a].ports[p].devs[d].unit_info = pIorb_uc->pUnitInfo; 961 1007 break; 962 1008 963 1009 default: 964 iorb_seterr( iorb, IOERR_CMD_NOT_SUPPORTED);1010 iorb_seterr(pIorb, IOERR_CMD_NOT_SUPPORTED); 965 1011 break; 966 1012 } 967 1013 968 1014 spin_unlock(drv_lock); 969 iorb_done( iorb);1015 iorb_done(vIorb, pIorb); 970 1016 } 971 1017 … … 993 1039 * 1..n emulated devices; SCSI target ID increments sequentially 994 1040 */ 995 void iocm_device_table(IORBH _far *iorb) 996 { 997 IORB_CONFIGURATION _far *iorb_conf; 998 DEVICETABLE _far *dt; 999 char _far *pos; 1041 void iocm_device_table(IORBH FAR16DATA *vIorb, IORBH *pIorb) 1042 { 1043 IORB_CONFIGURATION *pIorb_conf; 1044 DEVICETABLE FAR16DATA *vDt; 1045 DEVICETABLE *pDt; 1046 char *pPos; 1000 1047 int scsi_units = 0; 1001 1048 int scsi_id = 1; … … 1006 1053 int d; 1007 1054 1008 iorb_conf = (IORB_CONFIGURATION _far *) iorb; 1009 dt = iorb_conf->pDeviceTable; 1055 pIorb_conf = (IORB_CONFIGURATION *)pIorb; 1056 vDt = pIorb_conf->pDeviceTable; 1057 pDt = Far16ToFlat(vDt); 1010 1058 1011 1059 spin_lock(drv_lock); 1012 1060 1013 1061 /* initialize device table header */ 1014 dt->ADDLevelMajor = ADD_LEVEL_MAJOR;1015 dt->ADDLevelMinor = ADD_LEVEL_MINOR;1016 dt->ADDHandle = add_handle;1017 dt->TotalAdapters = ad_info_cnt + 1;1062 pDt->ADDLevelMajor = ADD_LEVEL_MAJOR; 1063 pDt->ADDLevelMinor = ADD_LEVEL_MINOR; 1064 pDt->ADDHandle = add_handle; 1065 pDt->TotalAdapters = ad_info_cnt + 1; 1018 1066 1019 1067 /* set start of adapter and device information tables */ 1020 p os = (char _far *) (dt->pAdapter + dt->TotalAdapters);1068 pPos = (char*)&pDt->pAdapter[pDt->TotalAdapters]; 1021 1069 1022 1070 /* go through 
all adapters, including the virtual SCSI adapter */ 1023 for (dta = 0; dta < dt->TotalAdapters; dta++) { 1024 ADAPTERINFO _far *ptr = (ADAPTERINFO _far *) pos; 1071 for (dta = 0; dta < pDt->TotalAdapters; dta++) 1072 { 1073 ADAPTERINFO *pPtr = (ADAPTERINFO *)pPos; 1025 1074 1026 1075 /* sanity check for sufficient space in device table */ 1027 if ((u32) (ptr + 1) - (u32) dt > iorb_conf->DeviceTableLen) { 1028 dprintf("error: device table provided by DASD too small\n"); 1029 iorb_seterr(iorb, IOERR_CMD_SW_RESOURCE); 1076 if ((u32)(pPtr + 1) - (u32)pDt > pIorb_conf->DeviceTableLen) 1077 { 1078 DPRINTF(0,"error: device table provided by DASD too small\n"); 1079 iorb_seterr(pIorb, IOERR_CMD_SW_RESOURCE); 1030 1080 goto iocm_device_table_done; 1031 1081 } 1032 1082 1033 dt->pAdapter[dta] = (ADAPTERINFO _near *) ((u32) ptr & 0xffff); 1034 memset(ptr, 0x00, sizeof(*ptr)); 1035 1036 ptr->AdapterIOAccess = AI_IOACCESS_BUS_MASTER; 1037 ptr->AdapterHostBus = AI_HOSTBUS_OTHER | AI_BUSWIDTH_32BIT; 1038 ptr->AdapterFlags = AF_16M | AF_HW_SCATGAT; 1039 ptr->MaxHWSGList = AHCI_MAX_SG / 2; /* AHCI S/G elements are 22 bits */ 1040 1041 if (dta < ad_info_cnt) { 1083 pDt->pAdapter[dta] = MakeNear16PtrFromDiff(pIorb_conf->pDeviceTable, pDt, pPtr); 1084 1085 //DPRINTF(2,"iocm_device_table: ptr=%x dta=%x pAdapter[dta]=%x pDeviceTable=%x\n", 1086 // ptr, dta, dt->pAdapter[dta], iorb_conf->pDeviceTable); 1087 memset(pPtr, 0x00, sizeof(*pPtr)); 1088 1089 pPtr->AdapterIOAccess = AI_IOACCESS_BUS_MASTER; 1090 pPtr->AdapterHostBus = AI_HOSTBUS_OTHER | AI_BUSWIDTH_32BIT; 1091 pPtr->AdapterFlags = AF_16M | AF_HW_SCATGAT; 1092 pPtr->MaxHWSGList = AHCI_MAX_SG / 2; /* AHCI S/G elements are 22 bits */ 1093 1094 if (dta < ad_info_cnt) 1095 { 1042 1096 /* this is a physical AHCI adapter */ 1043 1097 AD_INFO *ad_info = ad_infos + dta; 1044 1098 1045 ptr->AdapterDevBus = AI_DEVBUS_ST506 | AI_DEVBUS_32BIT; 1046 sprintf(ptr->AdapterName, "AHCI_%d", dta); 1047 1048 if (!ad_info->port_scan_done) { 1099 
pPtr->AdapterDevBus = AI_DEVBUS_ST506 | AI_DEVBUS_32BIT; 1100 sprintf(pPtr->AdapterName, "AHCI_%d", dta); 1101 1102 if (!ad_info->port_scan_done) 1103 { 1049 1104 /* first call; need to scan AHCI hardware for devices */ 1050 if (ad_info->busy) { 1051 dprintf("error: port scan requested while adapter was busy\n"); 1052 iorb_seterr(iorb, IOERR_CMD_SW_RESOURCE); 1105 if (ad_info->busy) 1106 { 1107 DPRINTF(0,"error: port scan requested while adapter was busy\n"); 1108 iorb_seterr(pIorb, IOERR_CMD_SW_RESOURCE); 1053 1109 goto iocm_device_table_done; 1054 1110 } … … 1059 1115 ad_info->busy = 0; 1060 1116 1061 if (rc != 0) { 1062 dprintf("error: port scan failed on adapter #%d\n", dta); 1063 iorb_seterr(iorb, IOERR_CMD_SW_RESOURCE); 1117 if (rc != 0) 1118 { 1119 DPRINTF(0,"error: port scan failed on adapter #%d\n", dta); 1120 iorb_seterr(pIorb, IOERR_CMD_SW_RESOURCE); 1064 1121 goto iocm_device_table_done; 1065 1122 } … … 1068 1125 1069 1126 /* insert physical (i.e. AHCI) devices into the device table */ 1070 for (p = 0; p <= ad_info->port_max; p++) { 1071 for (d = 0; d <= ad_info->ports[p].dev_max; d++) { 1072 if (ad_info->ports[p].devs[d].present) { 1073 if (ad_info->ports[p].devs[d].atapi && emulate_scsi[dta][p]) { 1127 for (p = 0; p <= ad_info->port_max; p++) 1128 { 1129 for (d = 0; d <= ad_info->ports[p].dev_max; d++) 1130 { 1131 if (ad_info->ports[p].devs[d].present) 1132 { 1133 if (ad_info->ports[p].devs[d].atapi && emulate_scsi[dta][p]) 1134 { 1074 1135 /* report this unit as SCSI unit */ 1075 1136 scsi_units++; 1076 1137 //continue; 1077 1138 } 1078 if (add_unit_info(iorb_conf, dta, dta, p, d, 0)) { 1139 if (add_unit_info(pIorb_conf, dta, dta, p, d, 0)) 1140 { 1079 1141 goto iocm_device_table_done; 1080 1142 } … … 1082 1144 } 1083 1145 } 1084 1085 } else { 1146 } 1147 else 1148 { 1086 1149 /* this is the virtual SCSI adapter */ 1087 if (scsi_units == 0) { 1150 if (scsi_units == 0) 1151 { 1088 1152 /* not a single unit to be emulated via SCSI */ 1089 
dt->TotalAdapters--;1153 pDt->TotalAdapters--; 1090 1154 break; 1091 1155 } 1092 1156 1093 1157 /* set adapter name and bus type to mimic a SCSI controller */ 1094 p tr->AdapterDevBus = AI_DEVBUS_SCSI_2 | AI_DEVBUS_16BIT;1095 sprintf(p tr->AdapterName, "AHCI_SCSI_0");1158 pPtr->AdapterDevBus = AI_DEVBUS_SCSI_2 | AI_DEVBUS_16BIT; 1159 sprintf(pPtr->AdapterName, "AHCI_SCSI_0"); 1096 1160 1097 1161 /* add all ATAPI units to be emulated by this virtual adaper */ 1098 for (a = 0; a < ad_info_cnt; a++) { 1162 for (a = 0; a < ad_info_cnt; a++) 1163 { 1099 1164 AD_INFO *ad_info = ad_infos + a; 1100 1165 1101 for (p = 0; p <= ad_info->port_max; p++) { 1102 for (d = 0; d <= ad_info->ports[p].dev_max; d++) { 1103 if (ad_info->ports[p].devs[d].present && ad_info->ports[p].devs[d].atapi && emulate_scsi[a][p]) { 1104 if (add_unit_info(iorb_conf, dta, a, p, d, scsi_id++)) { 1166 for (p = 0; p <= ad_info->port_max; p++) 1167 { 1168 for (d = 0; d <= ad_info->ports[p].dev_max; d++) 1169 { 1170 if (ad_info->ports[p].devs[d].present && ad_info->ports[p].devs[d].atapi && emulate_scsi[a][p]) 1171 { 1172 if (add_unit_info(pIorb_conf, dta, a, p, d, scsi_id++)) 1173 { 1105 1174 goto iocm_device_table_done; 1106 1175 } … … 1112 1181 1113 1182 /* calculate offset for next adapter */ 1114 p os = (char _far *) (ptr->UnitInfo + ptr->AdapterUnits);1183 pPos = (char *)(pPtr->UnitInfo + pPtr->AdapterUnits); 1115 1184 } 1116 1185 1117 1186 iocm_device_table_done: 1118 1187 spin_unlock(drv_lock); 1119 iorb_done( iorb);1188 iorb_done(vIorb, pIorb); 1120 1189 } 1121 1190 … … 1123 1192 * Handle IOCC_GEOMETRY requests. 
1124 1193 */ 1125 void iocc_geometry(IORBH _far *iorb)1126 { 1127 switch ( iorb->CommandModifier) {1128 1194 void iocc_geometry(IORBH FAR16DATA *vIorb, IORBH *pIorb) 1195 { 1196 switch (pIorb->CommandModifier) 1197 { 1129 1198 case IOCM_GET_MEDIA_GEOMETRY: 1130 1199 case IOCM_GET_DEVICE_GEOMETRY: 1131 add_workspace( iorb)->idempotent = 1;1132 ahci_get_geometry( iorb);1200 add_workspace(pIorb)->idempotent = 1; 1201 ahci_get_geometry(vIorb, pIorb); 1133 1202 break; 1134 1203 1135 1204 default: 1136 iorb_seterr( iorb, IOERR_CMD_NOT_SUPPORTED);1137 iorb_done( iorb);1205 iorb_seterr(pIorb, IOERR_CMD_NOT_SUPPORTED); 1206 iorb_done(vIorb, pIorb); 1138 1207 } 1139 1208 } … … 1142 1211 * Handle IOCC_EXECUTE_IO requests. 1143 1212 */ 1144 void iocc_execute_io(IORBH _far *iorb)1145 { 1146 switch ( iorb->CommandModifier) {1147 1213 void iocc_execute_io(IORBH FAR16DATA *vIorb, IORBH *pIorb) 1214 { 1215 switch (pIorb->CommandModifier) 1216 { 1148 1217 case IOCM_READ: 1149 add_workspace( iorb)->idempotent = 1;1150 ahci_read( iorb);1218 add_workspace(pIorb)->idempotent = 1; 1219 ahci_read(vIorb, pIorb); 1151 1220 break; 1152 1221 1153 1222 case IOCM_READ_VERIFY: 1154 add_workspace( iorb)->idempotent = 1;1155 ahci_verify( iorb);1223 add_workspace(pIorb)->idempotent = 1; 1224 ahci_verify(vIorb, pIorb); 1156 1225 break; 1157 1226 1158 1227 case IOCM_WRITE: 1159 add_workspace( iorb)->idempotent = 1;1160 ahci_write( iorb);1228 add_workspace(pIorb)->idempotent = 1; 1229 ahci_write(vIorb, pIorb); 1161 1230 break; 1162 1231 1163 1232 case IOCM_WRITE_VERIFY: 1164 add_workspace( iorb)->idempotent = 1;1165 ahci_write( iorb);1233 add_workspace(pIorb)->idempotent = 1; 1234 ahci_write(vIorb, pIorb); 1166 1235 break; 1167 1236 1168 1237 default: 1169 iorb_seterr( iorb, IOERR_CMD_NOT_SUPPORTED);1170 iorb_done( iorb);1238 iorb_seterr(pIorb, IOERR_CMD_NOT_SUPPORTED); 1239 iorb_done(vIorb, pIorb); 1171 1240 } 1172 1241 } … … 1175 1244 * Handle IOCC_UNIT_STATUS requests. 
1176 1245 */ 1177 void iocc_unit_status(IORBH _far *iorb)1178 { 1179 switch ( iorb->CommandModifier) {1180 1246 void iocc_unit_status(IORBH FAR16DATA *vIorb, IORBH *pIorb) 1247 { 1248 switch (pIorb->CommandModifier) 1249 { 1181 1250 case IOCM_GET_UNIT_STATUS: 1182 add_workspace( iorb)->idempotent = 1;1183 ahci_unit_ready( iorb);1251 add_workspace(pIorb)->idempotent = 1; 1252 ahci_unit_ready(vIorb, pIorb); 1184 1253 break; 1185 1254 1186 1255 default: 1187 iorb_seterr( iorb, IOERR_CMD_NOT_SUPPORTED);1188 iorb_done( iorb);1256 iorb_seterr(pIorb, IOERR_CMD_NOT_SUPPORTED); 1257 iorb_done(vIorb, pIorb); 1189 1258 } 1190 1259 } … … 1193 1262 * Handle IOCC_ADAPTER_PASSTHROUGH requests. 1194 1263 */ 1195 void iocc_adapter_passthru(IORBH _far *iorb) 1196 { 1197 switch (iorb->CommandModifier) { 1264 void iocc_adapter_passthru(IORBH FAR16DATA *vIorb, IORBH *pIorb) 1265 { 1266 switch (pIorb->CommandModifier) 1267 { 1198 1268 1199 1269 case IOCM_EXECUTE_CDB: 1200 add_workspace( iorb)->idempotent = 0;1201 ahci_execute_cdb( iorb);1270 add_workspace(pIorb)->idempotent = 0; 1271 ahci_execute_cdb(vIorb, pIorb); 1202 1272 break; 1203 1273 1204 1274 case IOCM_EXECUTE_ATA: 1205 add_workspace( iorb)->idempotent = 0;1206 ahci_execute_ata( iorb);1275 add_workspace(pIorb)->idempotent = 0; 1276 ahci_execute_ata(vIorb, pIorb); 1207 1277 break; 1208 1278 1209 1279 default: 1210 iorb_seterr( iorb, IOERR_CMD_NOT_SUPPORTED);1211 iorb_done( iorb);1280 iorb_seterr(pIorb, IOERR_CMD_NOT_SUPPORTED); 1281 iorb_done(vIorb, pIorb); 1212 1282 } 1213 1283 } … … 1217 1287 * adapter-level spinlock aquired. 
1218 1288 */ 1219 void iorb_queue_add(IORB_QUEUE _far *queue, IORBH _far *iorb) 1220 { 1221 if (iorb_priority(iorb)) { 1289 void iorb_queue_add(IORB_QUEUE *queue, IORBH FAR16DATA *vIorb, IORBH *pIorb) 1290 { 1291 if (iorb_priority(pIorb)) 1292 { 1222 1293 /* priority IORB; insert at first position */ 1223 iorb->pNxtIORB = queue->root; 1224 queue->root = iorb; 1225 1226 } else { 1294 pIorb->pNxtIORB = queue->vRoot; 1295 queue->vRoot = vIorb; 1296 } 1297 else 1298 { 1227 1299 /* append IORB to end of queue */ 1228 iorb->pNxtIORB = NULL; 1229 1230 if (queue->root == NULL) { 1231 queue->root = iorb; 1232 } else { 1233 queue->tail->pNxtIORB = iorb; 1234 } 1235 queue->tail = iorb; 1236 } 1237 1238 if (debug) { 1300 pIorb->pNxtIORB = NULL; 1301 1302 if (queue->vRoot == NULL) 1303 { 1304 queue->vRoot = vIorb; 1305 } 1306 else 1307 { 1308 ((IORBH *)Far16ToFlat(queue->vTail))->pNxtIORB = vIorb; 1309 } 1310 queue->vTail = vIorb; 1311 } 1312 1313 if (D32g_DbgLevel) 1314 { 1239 1315 /* determine queue type (local, driver, abort or port) and minimum debug 1240 1316 * level; otherwise, queue debug prints can become really confusing.
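`iorb_queue_add()` above keeps a singly linked queue with `root` and `tail` pointers: priority IORBs are pushed at the head, everything else is appended at the tail. A compact model of that logic, with illustrative types (this sketch also sets `tail` on a priority insert into an empty queue, a case the driver's head-insert branch leaves to the caller's initialization):

```c
#include <assert.h>
#include <stddef.h>

struct item  { struct item *next; };
struct queue { struct item *root, *tail; };

static void queue_add(struct queue *q, struct item *it, int priority)
{
    if (priority) {
        it->next = q->root;              /* insert at first position */
        q->root = it;
        if (q->tail == NULL)
            q->tail = it;                /* queue was empty */
    } else {
        it->next = NULL;                 /* append at end of queue */
        if (q->root == NULL)
            q->root = it;
        else
            q->tail->next = it;
        q->tail = it;
    }
}
```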
… … 1243 1319 int min_debug = 1; 1244 1320 1245 if ((u32) queue >> 16 == (u32) (void _far *) &queue >> 16) { 1321 if ((u32)queue >> 16 == (u32)&queue >> 16) /* DAZ this is bogus */ 1322 { 1246 1323 /* this queue is on the stack */ 1247 1324 queue_type = "local"; 1248 1325 min_debug = 2; 1249 1326 1250 } else if (queue == &driver_queue) { 1327 } 1328 else if (queue == &driver_queue) 1329 { 1251 1330 queue_type = "driver"; 1252 1331 1253 } else if (queue == &abort_queue) { 1332 } 1333 else if (queue == &abort_queue) 1334 { 1254 1335 queue_type = "abort"; 1255 1336 min_debug = 2; 1256 1337 1257 } else { 1338 } 1339 else 1340 { 1258 1341 queue_type = "port"; 1259 1342 } 1260 1343 1261 if (debug > min_debug) { 1262 aprintf("IORB %Fp queued (cmd = %d/%d, queue = %Fp [%s], timeout = %ld)\n", 1263 iorb, iorb->CommandCode, iorb->CommandModifier, queue, queue_type, 1264 iorb->Timeout); 1265 } 1344 DPRINTF(min_debug,"IORB %x queued (cmd=%d/%d queue=%x [%s], timeout=%d)\n", 1345 vIorb, pIorb->CommandCode, pIorb->CommandModifier, queue, queue_type, 1346 pIorb->Timeout); 1266 1347 } 1267 1348 } … … 1271 1352 * the adapter-level spinlock aquired. 
1272 1353 */ 1273 int iorb_queue_del(IORB_QUEUE _far *queue, IORBH _far *iorb)1274 { 1275 IORBH _far *_iorb;1276 IORBH _far *_prev = NULL;1354 int iorb_queue_del(IORB_QUEUE *queue, IORBH FAR16DATA *vIorb) 1355 { 1356 IORBH FAR16DATA *_vIorb; 1357 IORBH FAR16DATA *_vPrev = NULL; 1277 1358 int found = 0; 1278 1359 1279 for (_iorb = queue->root; _iorb != NULL; _iorb = _iorb->pNxtIORB) { 1280 if (_iorb == iorb) { 1360 for (_vIorb = queue->vRoot; _vIorb != NULL; ) 1361 { 1362 IORBH *_pIorb = Far16ToFlat(_vIorb); 1363 if (_vIorb == vIorb) 1364 { 1365 1281 1366 /* found the IORB to be removed */ 1282 if (_prev != NULL) { 1283 _prev->pNxtIORB = _iorb->pNxtIORB; 1284 } else { 1285 queue->root = _iorb->pNxtIORB; 1286 } 1287 if (_iorb == queue->tail) { 1288 queue->tail = _prev; 1367 if (_vPrev != NULL) 1368 { 1369 ((IORBH*)Far16ToFlat(_vPrev))->pNxtIORB = _pIorb->pNxtIORB; 1370 } 1371 else 1372 { 1373 queue->vRoot = _pIorb->pNxtIORB; 1374 } 1375 if (_vIorb == queue->vTail) 1376 { 1377 queue->vTail = _vPrev; 1289 1378 } 1290 1379 found = 1; 1291 1380 break; 1292 1381 } 1293 _prev = _iorb; 1294 } 1295 1296 if (found) { 1297 ddprintf("IORB %Fp removed (queue = %Fp)\n", iorb, queue); 1298 } else { 1299 dprintf("IORB %Fp not found in queue %Fp\n", iorb, queue); 1382 _vPrev = _vIorb; 1383 _vIorb = _pIorb->pNxtIORB; 1384 } 1385 1386 if (found) 1387 { 1388 DPRINTF(3,"IORB %x removed (queue = %x)\n", vIorb, queue); 1389 } 1390 else 1391 { 1392 DPRINTF(2,"IORB %x not found in queue %x\n", vIorb, queue); 1300 1393 } 1301 1394 … … 1309 1402 * status to the specified error code. 1310 1403 */ 1311 void iorb_seterr(IORBH _far *iorb, USHORT error_code)1312 { 1313 iorb->ErrorCode = error_code;1314 iorb->Status |= IORB_ERROR;1404 void iorb_seterr(IORBH *pIorb, USHORT error_code) 1405 { 1406 pIorb->ErrorCode = error_code; 1407 pIorb->Status |= IORB_ERROR; 1315 1408 } 1316 1409 … … 1333 1426 * See abort_ctxhook() for an example. 
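The rewritten `iorb_queue_del()` above walks the queue with a trailing previous pointer so it can unlink the match and fix up `root` and `tail`; note that in the flat-pointer version the next link must now be read through `Far16ToFlat()` before advancing. The unlink logic itself, modelled with plain pointers:

```c
#include <assert.h>
#include <stddef.h>

struct item  { struct item *next; };
struct queue { struct item *root, *tail; };

/* Unlink 'it' from the queue; returns 0 if found, -1 otherwise,
 * mirroring the driver's 'found' flag.  Illustrative types only. */
static int queue_del(struct queue *q, struct item *it)
{
    struct item *cur, *prev = NULL;

    for (cur = q->root; cur != NULL; prev = cur, cur = cur->next) {
        if (cur != it)
            continue;
        if (prev != NULL)
            prev->next = cur->next;      /* bypass the match */
        else
            q->root = cur->next;         /* match was at the head */
        if (q->tail == cur)
            q->tail = prev;              /* match was at the tail */
        return 0;
    }
    return -1;
}
```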
@@ -1334 +1427 @@
 */
-void iorb_done(IORBH _far *iorb)
-{
-  int a = iorb_unit_adapter(iorb);
-  int p = iorb_unit_port(iorb);
+void iorb_done(IORBH FAR16DATA *vIorb, IORBH *pIorb)
+{
+  int a = iorb_unit_adapter(pIorb);
+  int p = iorb_unit_port(pIorb);
 
   /* remove IORB from corresponding queue */
   spin_lock(drv_lock);
-  if (iorb_driver_level(iorb)) {
-    iorb_queue_del(&driver_queue, iorb);
-  } else {
-    iorb_queue_del(&ad_infos[a].ports[p].iorb_queue, iorb);
-  }
-  aws_free(add_workspace(iorb));
+  if (iorb_driver_level(pIorb))
+  {
+    iorb_queue_del(&driver_queue, vIorb);
+  }
+  else
+  {
+    iorb_queue_del(&ad_infos[a].ports[p].iorb_queue, vIorb);
+  }
+  aws_free(add_workspace(pIorb));
   spin_unlock(drv_lock);
 
-  iorb_complete(iorb);
+  iorb_complete(vIorb, pIorb);
 }
@@ -1358 +1454 @@
 * the next request to us before even returning from this function.
 */
-void iorb_complete(IORBH _far *iorb)
-{
-  iorb->Status |= IORB_DONE;
-
-  ddprintf("IORB %Fp complete (status = 0x%04x, error = 0x%04x)\n",
-           iorb, iorb->Status, iorb->ErrorCode);
-
-  if (iorb->RequestControl & IORB_ASYNC_POST) {
-    iorb->NotifyAddress(iorb);
+void iorb_complete(IORBH FAR16DATA *vIorb, IORBH *pIorb)
+{
+  pIorb->Status |= IORB_DONE;
+
+  DPRINTF(1,"IORB %x complete status=0x%04x error=0x%04x\n",
+          vIorb, pIorb->Status, pIorb->ErrorCode);
+
+  if (pIorb->RequestControl & IORB_ASYNC_POST)
+  {
+    Dev32Help_CallFar16((PFNFAR16)pIorb->NotifyAddress, vIorb);
   }
 }
@@ -1379 +1476 @@
 * - no_ncq
 */
-void iorb_requeue(IORBH _far *iorb)
-{
-  ADD_WORKSPACE _far *aws = add_workspace(iorb);
+void iorb_requeue(IORBH *pIorb)
+{
+  ADD_WORKSPACE *aws = add_workspace(pIorb);
   u16 no_ncq = aws->no_ncq;
   u16 unaligned = aws->unaligned;
@@ -1398 +1495 @@
 * be called with the spinlock held to prevent race conditions.
 */
-void aws_free(ADD_WORKSPACE _far *aws)
-{
-  if (aws->timer != 0) {
-    ADD_CancelTimer(aws->timer);
+void aws_free(ADD_WORKSPACE *aws)
+{
+  if (aws->timer != 0)
+  {
+    Timer_CancelTimer(aws->timer);
     aws->timer = 0;
   }
 
-  if (aws->buf != NULL) {
-    free(aws->buf);
+  if (aws->buf != NULL)
+  {
+    MemFree(aws->buf);
     aws->buf = NULL;
   }
@@ -1421 +1520 @@
 
   spin_lock(drv_lock);
-  while (ai->busy) {
+  while (ai->busy)
+  {
     spin_unlock(drv_lock);
-    timer_init(&Timer, 250);
-    while (!timer_check_and_block(&Timer));
+    TimerInit(&Timer, 250);
+    while (!TimerCheckAndBlock(&Timer));
     spin_lock(drv_lock);
   }
@@ -1444 +1544 @@
 * separate function which is invoked via a context hook.
 */
-void _cdecl _far timeout_callback(ULONG timer_handle, ULONG p1, ULONG p2)
-{
-  IORBH _far *iorb = (IORBH _far *) p1;
-  int a = iorb_unit_adapter(iorb);
-  int p = iorb_unit_port(iorb);
-
-  ADD_CancelTimer(timer_handle);
-  dprintf("timeout for IORB %Fp\n", iorb);
+void __syscall timeout_callback(ULONG timer_handle, ULONG p1)
+{
+  IORBH FAR16DATA *vIorb = (IORBH FAR16DATA *)CastULONGToFar16(p1);
+  IORBH *pIorb = Far16ToFlat(vIorb);
+  int a = iorb_unit_adapter(pIorb);
+  int p = iorb_unit_port(pIorb);
+
+  Timer_CancelTimer(timer_handle);
+  DPRINTF(0,"timeout for IORB %x\n", vIorb);
 
   /* Move the timed-out IORB to the abort queue. Since it's possible that the
@@ -1460 +1561 @@
   */
   spin_lock(drv_lock);
-  if (iorb_queue_del(&ad_infos[a].ports[p].iorb_queue, iorb) == 0) {
-    iorb_queue_add(&abort_queue, iorb);
-    iorb->ErrorCode = IOERR_ADAPTER_TIMEOUT;
+  if (iorb_queue_del(&ad_infos[a].ports[p].iorb_queue, vIorb) == 0)
+  {
+    iorb_queue_add(&abort_queue, vIorb, pIorb);
+    pIorb->ErrorCode = IOERR_ADAPTER_TIMEOUT;
   }
   spin_unlock(drv_lock);
@@ -1477 +1579 @@
   * will process our IORB as well.
   */
-  DevHelp_ArmCtxHook(0, reset_ctxhook_h);
+  KernArmHook(reset_ctxhook_h, 0, 0);
 
   /* Set up a watchdog timer which calls the context hook manually in case
@@ -1485 +1587 @@
   * does in the early boot phase.
   */
-  ADD_StartTimerMS(&th_reset_watchdog, 5000, (PFN) reset_watchdog, 0, 0);
+  Timer_StartTimerMS(&th_reset_watchdog, 5000, reset_watchdog, 0);
 }
@@ -1499 +1601 @@
 * problems during the early boot phase.
 */
-void _cdecl _far reset_watchdog(ULONG timer_handle, ULONG p1, ULONG p2)
+void __syscall reset_watchdog(ULONG timer_handle, ULONG p1)
 {
   /* reset watchdog timer */
-  ADD_CancelTimer(timer_handle);
-  dprintf("reset watchdog invoked\n");
+  Timer_CancelTimer(timer_handle);
+  DPRINTF(0,"reset watchdog invoked\n");
 
   /* call context hook manually */
   reset_ctxhook(0);
 }
-
-/******************************************************************************
- * small_code_ - this dummy func resolves the undefined reference linker
- * error that occurrs when linking WATCOM objects with DDK's link.exe
- */
-void _cdecl small_code_(void)
-{
-}
@@ -1528 +1622 @@
 * index of the virtual SCSI adapter.
 */
-static int add_unit_info(IORB_CONFIGURATION _far *iorb_conf, int dta,
+static int add_unit_info(IORB_CONFIGURATION *pIorb_conf, int dta,
                          int a, int p, int d, int scsi_id)
 {
-  DEVICETABLE _far *dt = iorb_conf->pDeviceTable;
-  ADAPTERINFO _far *ptr = (ADAPTERINFO _far *) (((u32) dt & 0xffff0000U) +
-                                                (u16) dt->pAdapter[dta]);
-  UNITINFO _far *ui = ptr->UnitInfo + ptr->AdapterUnits;
+  DEVICETABLE *pDt = Far16ToFlat(pIorb_conf->pDeviceTable);
+  ADAPTERINFO *pPtr;
+  UNITINFO *pUi;
   AD_INFO *ai = ad_infos + a;
 
-  if ((u32) (ui + 1) - (u32) dt > iorb_conf->DeviceTableLen) {
-    dprintf("error: device table provided by DASD too small\n");
-    iorb_seterr(&iorb_conf->iorbh, IOERR_CMD_SW_RESOURCE);
+  pPtr = (ADAPTERINFO *)MakeFlatFromNear16(pIorb_conf->pDeviceTable, pDt->pAdapter[dta]);
+  //DPRINTF(2,"add_unit_info: ptr=%x dta=%x pAdapter[dta]=%x pDeviceTable=%x\n",
+  //        ptr, dta, dt->pAdapter[dta], iorb_conf->pDeviceTable);
+
+  pUi = &pPtr->UnitInfo[pPtr->AdapterUnits];
+
+  if ((u32)(pUi + 1) - (u32)pDt > pIorb_conf->DeviceTableLen)
+  {
+    DPRINTF(0,"error: device table provided by DASD too small\n");
+    iorb_seterr(&pIorb_conf->iorbh, IOERR_CMD_SW_RESOURCE);
     return(-1);
   }
 
-  if (ai->ports[p].devs[d].unit_info == NULL) {
+  if (ai->ports[p].devs[d].unit_info == NULL)
+  {
     /* provide original information about this device (unit) */
-    memset(ui, 0x00, sizeof(*ui));
-    ui->AdapterIndex = dta;               /* device table adapter index */
-    ui->UnitHandle = iorb_unit(a, p, d);  /* physical adapter index */
-    ui->UnitIndex = ptr->AdapterUnits;
-    ui->UnitType = ai->ports[p].devs[d].dev_type;
-    ui->QueuingCount = ai->ports[p].devs[d].ncq_max;;
-    if (ai->ports[p].devs[d].removable) {
-      ui->UnitFlags |= UF_REMOVABLE;
+    memset(pUi, 0x00, sizeof(*pUi));
+    pUi->AdapterIndex = dta;              /* device table adapter index */
+    pUi->UnitHandle = iorb_unit(a, p, d); /* physical adapter index */
+    pUi->UnitIndex = pPtr->AdapterUnits;
+    pUi->UnitType = ai->ports[p].devs[d].dev_type;
+    pUi->QueuingCount = ai->ports[p].devs[d].ncq_max;
+    if (ai->ports[p].devs[d].removable)
+    {
+      pUi->UnitFlags |= UF_REMOVABLE;
     }
     if (scsi_id > 0) {
       /* set fake SCSI ID for this unit */
-      ui->UnitSCSITargetID = scsi_id;
-    }
-  } else {
+      pUi->UnitSCSITargetID = scsi_id;
+    }
+  }
+  else
+  {
     /* copy updated device (unit) information (IOCM_CHANGE_UNITINFO) */
-    memcpy(ui, ai->ports[p].devs[d].unit_info, sizeof(*ui));
+    memcpy(pUi, ai->ports[p].devs[d].unit_info, sizeof(*pUi));
   }
 
-  ptr->AdapterUnits++;
+  pPtr->AdapterUnits++;
   return(0);
 }
 
-/*******************************************************************************
- * Register kernel exit handler for trap dumps. Our exit handler will be called
- * right before the kernel starts a dump; that's where we reset the controller
- * so it supports BIOS int13 I/O calls.
- */
-static void register_krnl_exit(void)
-{
-  _asm {
-    push ds
-    push es
-    push bx
-    push si
-    push di
-
-    mov ax, FLAG_KRNL_EXIT_ADD
-    mov cx, TYPE_KRNL_EXIT_INT13
-    mov bx, SEG asm_krnl_exit
-    mov si, OFFSET asm_krnl_exit
-    mov dl, DevHlp_RegisterKrnlExit
-
-    call dword ptr [Device_Help]
-
-    pop di
-    pop si
-    pop bx
-    pop es
-    pop ds
-  }
-
-  dprintf("Registered kernel exit routine for INT13 mode\n");
-}
--- trunk/src/os2ahci/os2ahci.h (r176)
+++ trunk/src/os2ahci/os2ahci.h (r178)
@@ -4 +4 @@
 * Copyright (c) 2011 thi.guten Software Development
 * Copyright (c) 2011 Mensys B.V.
-* Copyright (c) 2013-2015 David Azarewicz
+* Copyright (c) 2013-2016 David Azarewicz
 *
 * Authors: Christian Mueller, Markus Thielen
@@ -43 +43 @@
 //#define LEGACY_APM
 
-#define INCL_NOPMAPI
-#define INCL_DOSINFOSEG
-#define INCL_NO_SCB
-#define INCL_DOSERRORS
-#include <os2.h>
-#include <dos.h>
-#include <bseerr.h>
-#include <dskinit.h>
-#include <scb.h>
-
-#include <devhdr.h>
-#include <iorb.h>
-#include <strat2.h>
-#include <reqpkt.h>
-
-/* NOTE: (Rousseau)
- * The regular dhcalls.h from $(DDK)\base\h also works.
- * The devhelp.h from $(DDK)\base\h produces inline assembler errors.
- * The modified devhelp.h from ..\include works OK and is used because it
- * generates a slightly smaller driver image.
- */
-#ifdef __WATCOMC__
-/* include WATCOM specific DEVHELP stubs */
-#include <devhelp.h>
-#else
-#include <dhcalls.h>
-#endif
-
-#include <addcalls.h>
-#include <rmcalls.h>
-#include <devclass.h>
-#include <devcmd.h>
-#include <rmbase.h>
-
+#include "Dev32lib.h"
+#include "Dev32rmcalls.h"
+#include <Dev32iorb.h>
 #include "ahci.h"
 #include "ahci-idc.h"
@@ -82 +51 @@
 /* -------------------------- macros and constants ------------------------- */
 
 #define MAX_AD          8    /* maximum number of adapters */
 
-/* Timer pool size. In theory, we need one timer per outstanding command plus
- * a few miscellaneous timers but it's unlikely we'll ever have outstanding
- * commands on all devices on all ports on all apapters -- this would be
- * 8 * 32 * 32 = 8192 outstanding commands on a maximum of 8 * 32 * 15 = 3840
- * devices and that's a bit of an exaggeration. It should be more than enough
- * to have 128 timers.
- */
 #define TIMER_COUNT     128
-#define TIMER_POOL_SIZE (sizeof(ADD_TIMER_POOL) + \
-                         TIMER_COUNT * sizeof(ADD_TIMER_DATA))
 
 /* default command timeout (can be overwritten in the IORB) */
 #define DEFAULT_TIMEOUT 30000
@@ -106 +66 @@
 * bit left before the ADD workspace structure would become too large...
 */
 #define MAX_RETRIES     3
-
-/* max/min macros */
-#define max(a, b) (a) > (b) ? (a) : (b)
-#define min(a, b) (a) < (b) ? (a) : (b)
 
 /* debug output macros */
 #ifdef DEBUG
-#define dprintf     if (debug > 0) printf
-#define dphex       if (debug > 0) phex
-#define ddprintf    if (debug > 1) printf
-#define ddphex      if (debug > 1) phex
-#define dddprintf   if (debug > 2) printf
-#define dddphex     if (debug > 2) phex
-#define ntprintf    printf_nts
-#define aprintf     printf
+#define DPRINTF(a,b,...)      dprintf(a, b, ##__VA_ARGS__)
+#define DHEXDUMP(a,b,c,d,...) dHexDump(a, b, c, d, ##__VA_ARGS__)
+#define NTPRINTF(...)         dprintf(0, ##__VA_ARGS__)
+#define DUMP_HOST_REGS(l,a,b) {if (D32g_DbgLevel>=l) ahci_dump_host_regs(a,b);}
+#define DUMP_PORT_REGS(l,a,b) {if (D32g_DbgLevel>=l) ahci_dump_port_regs(a,b);}
 #else
-#define dprintf(a,...)
-#define dphex(a,b,c,...)
-#define ddprintf(a,...)
-#define ddphex(a,b,c,...)
-#define dddprintf(a,...)
-#define dddphex(a,b,c,...)
-#define ntprintf(a,...)
-#define aprintf(a,...)
+#define DPRINTF(a,b,...)
+#define DHEXDUMP(a,b,c,d,...)
+#define NTPRINTF(a,...)
+#define DUMP_HOST_REGS(a,b)
+#define DUMP_PORT_REGS(l,a,b)
 #endif
@@ -137 +87 @@
 * with vprintf-like funcs)
 */
-#define ciprintf    if (verbosity > 0) cprintf
-#define ciiprintf   if (verbosity > 1) cprintf
-
-/* TRACE macros (for our internal ring buffer trace) */
-#define AHCI_DEBUG_BUF_SIZE 0x10000UL  /* 64k must be a power of 2 */
-#define AHCI_INFO_BUF_SIZE   0x1000UL  /* 4k must be a power of 2 */
+#define ciprintf(a,...)  {if (verbosity > 0) iprintf(a, ##__VA_ARGS__);}
+#define ciiprintf(a,...) {if (verbosity > 1) iprintf(a, ##__VA_ARGS__);}
 
 /* adapter number from AD_INFO pointer; mainly for dprintf() purposes */
-#define ad_no(ai) (((u16) ai - (u16) ad_infos) / sizeof(*ai))
-
-/* Convert far function address into NPFN (the DDK needs this all over the
- * place and just casting to NPFN will produce a "segment lost in conversion"
- * warning. Since casting to a u32 is a bit nasty for function pointers and
- * might have to be revised for different compilers, we'll use a central
- * macro for this crap.
- */
-#define mk_NPFN(func) (NPFN) (u32) (func)
-
-/* stdarg.h macros with explicit far pointers
- *
- * NOTE: The compiler pushes fixed arguments with 16 bits minimum, thus
- *       the last fixed argument (i.e. the one passed to va_start) must
- *       have at least 16 bits. Otherwise, the address calculation in
- *       va_start() will fail.
- */
-typedef char _far *va_list;
+#define ad_no(ai) (((u32)ai - (u32)ad_infos) / sizeof(*ai))
+
+#define MakeNear16PtrFromDiff(Base16, Base32, New32) \
+  ((CastFar16ToULONG(Base16) + ((ULONG)(New32) - (ULONG)(Base32))) & 0xffff)
+
+#define MakeFar16PtrFromDiff(Base16, Base32, New32) \
+  CastULONGToFar16(CastFar16ToULONG(Base16) + ((ULONG)(New32) - (ULONG)(Base32)))
+
+/* Takes the selector from the first parameter, and the offset specified
+ * in the second parameter, and returns a flat pointer
+ */
+extern void *MakeFlatFromNear16(void __far16 *, USHORT);
+#pragma aux MakeFlatFromNear16 = \
+  "mov ax, bx" \
+  "call Far16ToFlat" \
+  parm nomemory [eax] [bx] value [eax] modify nomemory exact [eax];
+
+/* stdarg.h macros with explicit far pointers */
+typedef char *va_list;
 #define va_start(va, last) va = (va_list) (&last + 1)
-#define va_arg(va, type)   ((type _far *) (va += sizeof(type)))[-1]
+#define va_arg(va, type)   ((type *) (va += sizeof(type)))[-1]
 #define va_end(va)         va = 0
 
-/* ctype macros */
-#define isupper(ch)  ((ch) >= 'A' && (ch) <= 'Z')
-#define tolower(ch)  (isupper(ch) ? (ch) + ('a' - 'A') : (ch))
-
 /* stddef macros */
-#define offsetof(s, e) ((u16) &((s *) 0)->e)
-
-/* SMP spinlock compatibility macros for older DDKs using CLI/STI */
-#ifdef SPINLOCK_EMULATION
-#define DevHelp_CreateSpinLock(p_sph)  *(p_sph) = 0
-#define DevHelp_FreeSpinLock(sph)      0
-
-#define DevHelp_AcquireSpinLock(sph)   if ((sph) != 0) \
-                                         panic("recursive spinlock"); \
-                                       (sph) = disable()
-
-#define DevHelp_ReleaseSpinLock(sph)   if (sph) { \
-                                         (sph) = 0; \
-                                         enable(); \
-                                       }
-#endif
+#define offsetof(s, e) ((u32)&((s *)0)->e)
 
 /* shortcut macros */
-#define spin_lock(sl)   DevHelp_AcquireSpinLock(sl)
-#define spin_unlock(sl) DevHelp_ReleaseSpinLock(sl)
-
-/* Get AHCI port MMIO base from AD_INFO and port number. For the time being,
- * MMIO addresses are assumed to be valid 16:16 pointers which implies
- * that one GDT selector is allocated per adapter.
- */
-#define port_base(ai, p) ((u8 _far *) (ai)->mmio + 0x100 + (p) * 0x80)
-
-/* Get address of port-specific DMA scratch buffer. The total size of all DMA
- * buffers required for 32 ports exceeds 65536 bytes, thus we need multiple
- * GDT selectors to access all port DMA scratch buffers and some logic to map
- * a port number to the corresponding DMA scratch buffer address.
- */
-#define PORT_DMA_BUFS_PER_SEG ((size_t) (65536UL / AHCI_PORT_PRIV_DMA_SZ))
-#define PORT_DMA_BUF_SEGS     ((AHCI_MAX_PORTS + PORT_DMA_BUFS_PER_SEG - 1) \
-                               / PORT_DMA_BUFS_PER_SEG)
-#define PORT_DMA_SEG_SIZE     ((u32) PORT_DMA_BUFS_PER_SEG * \
-                               (u32) AHCI_PORT_PRIV_DMA_SZ)
-
-#define port_dma_base(ai, p) \
-  ((AHCI_PORT_DMA _far *) ((ai)->dma_buf[(p) / PORT_DMA_BUFS_PER_SEG] + \
-                           ((p) % PORT_DMA_BUFS_PER_SEG) * AHCI_PORT_PRIV_DMA_SZ))
-
-#define port_dma_base_phys(ai, p) \
-  ((ai)->dma_buf_phys + (u32) (p) * AHCI_PORT_PRIV_DMA_SZ)
+#define spin_lock(sl)   KernAcquireSpinLock(&sl)
+#define spin_unlock(sl) KernReleaseSpinLock(&sl)
+
+/* Get AHCI port MMIO base from AD_INFO and port number. */
+#define port_base(ai, p)          ((u8 *) (ai)->mmio + 0x100 + (p) * 0x80)
+#define port_dma_base(ai, p)      ((AHCI_PORT_DMA *) ((ai)->dma_buf[(p)]))
+#define port_dma_base_phys(ai, p) ((ai)->dma_buf_phys[(p)])
@@ -229 +138 @@
                           (((u16) (p) & 0x0fU) << 4) | \
                           (((u16) (d) & 0x0fU)))
-#define iorb_unit_adapter(iorb) (((u16) (iorb)->UnitHandle >> 8) & 0x07U)
-#define iorb_unit_port(iorb)    (((u16) (iorb)->UnitHandle >> 4) & 0x0fU)
-#define iorb_unit_device(iorb)  ((u16) (iorb)->UnitHandle & 0x0fU)
+#define iorb_unit_adapter(iorb) (((iorb)->UnitHandle >> 8) & 0x07)
+#define iorb_unit_port(iorb)    (((iorb)->UnitHandle >> 4) & 0x0f)
+#define iorb_unit_device(iorb)  ((iorb)->UnitHandle & 0x0f)
@@ -244 +153 @@
 /* access IORB ADD workspace */
-#define add_workspace(iorb) ((ADD_WORKSPACE _far *) &(iorb)->ADDWorkSpace)
+#define add_workspace(iorb) ((ADD_WORKSPACE *) &(iorb)->ADDWorkSpace)
@@ -303 +212 @@
 /* ------------------------ typedefs and structures ------------------------ */
 
-typedef unsigned int size_t;
-
-typedef struct {
-  u32 Start;
-  u32 End;
-} TIMER;
-
 /* PCI device information structure; this is used both for scanning and for
  * identification purposes in 'AD_INFO'; based on the Linux pci_device_id
@@ -329 +231 @@
 */
 typedef struct {
-  IORBH _far *volatile root;        /* root of request list */
-  IORBH _far *volatile tail;        /* tail of request list */
+  IORBH FAR16DATA *volatile vRoot;  /* root of request list */
+  IORBH FAR16DATA *volatile vTail;  /* tail of request list */
 } IORB_QUEUE;
@@ -352 +254 @@
   struct {
-    unsigned allocated  : 1;  /* if != 0, device is allocated */
-    unsigned present    : 1;  /* if != 0, device is present */
-    unsigned lba48      : 1;  /* if != 0, device supports 48-bit LBA */
-    unsigned atapi      : 1;  /* if != 0, this is an ATAPI device */
-    unsigned atapi_16   : 1;  /* if != 0, device suports 16-byte cmds */
-    unsigned removable  : 1;  /* if != 0, device has removable media */
-    unsigned dev_type   : 5;  /* device type (UIB_TYPE_* in iorb.h) */
-    unsigned ncq_max    : 5;  /* maximum tag number for queued commands */
-    UNITINFO _far *unit_info; /* pointer to modified unit info */
-    DEV_INFO dev_info;
+    unsigned allocated  :1;   /* if != 0, device is allocated */
+    unsigned present    :1;   /* if != 0, device is present */
+    unsigned lba48      :1;   /* if != 0, device supports 48-bit LBA */
+    unsigned atapi      :1;   /* if != 0, this is an ATAPI device */
+    unsigned atapi_16   :1;   /* if != 0, device suports 16-byte cmds */
+    unsigned removable  :1;   /* if != 0, device has removable media */
+    unsigned dev_type   :5;   /* device type (UIB_TYPE_* in iorb.h) */
+    unsigned ncq_max    :5;   /* maximum tag number for queued commands */
+    UNITINFO *unit_info;      /* pointer to modified unit info */
+    DEV_INFO dev_info;
   } devs[AHCI_MAX_DEVS];
 } P_INFO;
@@ -391 +293 @@
   HRESOURCE    rm_irq;       /* resource handle for IRQ */
 
-  u8           bus;          /* PCI bus number */
-  u8           dev_func;     /* PCI device and function number */
+  u16          bus_dev_func; /* PCI bus number, PCI device and function number */
   u16          irq;          /* interrupt number */
 
   u32          mmio_phys;    /* physical address of MMIO region */
   u32          mmio_size;    /* size of MMIO region */
-  u8 _far     *mmio;         /* pointer to this adapter's MMIO region */
-
-  u32          dma_buf_phys; /* physical address of DMA scratch buffer */
-  u8 _far     *dma_buf[PORT_DMA_BUF_SEGS]; /* DMA scatch buffer */
+  u8          *mmio;         /* pointer to this adapter's MMIO region */
+
+  u32          dma_buf_phys[AHCI_MAX_PORTS]; /* physical address of DMA scratch buffer */
+  u8          *dma_buf[AHCI_MAX_PORTS];      /* DMA scratch buffers */
 
   P_INFO       ports[AHCI_MAX_PORTS]; /* SATA ports on this adapter */
@@ -407 +308 @@
 /* ADD workspace in IORB (must not exceed 16 bytes) */
 typedef struct {
-  void (*ppfunc)(IORBH _far *iorb); /* post-processing function */
-  void *buf;               /* response buffer (e.g. for identify cmds) */
-  ULONG timer;             /* timer for timeout procesing */
-  USHORT blocks;           /* number of blocks to be transferred */
-  unsigned processing : 1; /* IORB is being processd */
-  unsigned idempotent : 1; /* IORB is idempotent (can be retried) */
-  unsigned queued_hw  : 1; /* IORB has been queued to hardware */
-  unsigned no_ncq     : 1; /* must not use native command queuing */
-  unsigned is_ncq     : 1; /* should use native command queueing */
-  unsigned complete   : 1; /* IORB has completed processing */
-  unsigned unaligned  : 1; /* unaligned S/G; need to use transfer buffer */
-  unsigned retries    : 2; /* number of retries for this command */
-  unsigned cmd_slot   : 5; /* AHCI command slot for this IORB */
-} ADD_WORKSPACE;
+  void (*ppfunc)(IORBH FAR16DATA *vIorb, IORBH *pIorb); /* 00 post-processing function */
+  void *buf;                     /* 04 response buffer (e.g. for identify cmds) */
+  ULONG timer;                   /* 08 timer for timeout procesing */
+  USHORT blocks;                 /* 0c number of blocks to be transferred */
+  unsigned short processing :1;  /* 0e IORB is being processd */
+  unsigned short idempotent :1;  /* IORB is idempotent (can be retried) */
+  unsigned short queued_hw  :1;  /* IORB has been queued to hardware */
+  unsigned short no_ncq     :1;  /* must not use native command queuing */
+  unsigned short is_ncq     :1;  /* should use native command queueing */
+  unsigned short complete   :1;  /* IORB has completed processing */
+  unsigned short unaligned  :1;  /* unaligned S/G; need to use transfer buffer */
+  unsigned short retries    :2;  /* number of retries for this command */
+  unsigned short cmd_slot   :5;  /* AHCI command slot for this IORB */
+} ADD_WORKSPACE; /* 10 */
@@ -457 +358 @@
 /* -------------------------- function prototypes -------------------------- */
 
-/* init.asm */
-extern u32  _cdecl readl (void _far *addr);
-extern u32  _cdecl writel (void _far *addr, u32 val);
-extern void _far * _cdecl memcpy (void _far *v_dst, void _far *v_src, int len);
-extern void _far * _cdecl memset (void _far *p, int ch, size_t len);
-extern void _cdecl _far restart_hook (void);
-extern void _cdecl _far reset_hook (void);
-extern void _cdecl _far engine_hook (void);
-extern void _cdecl _far asm_krnl_exit (void);
-extern void _cdecl udelay (u16 microseconds);
+static inline unsigned long readl(void *a)
+{
+  return *(volatile unsigned long*)a;
+}
+
+static inline void writel(void *a, unsigned long v)
+{
+  *(volatile unsigned long*)a = v;
+}
+
+extern void shutdown_driver(void);
 
 /* os2ahci.c */
-extern USHORT init_drv (RPINITIN _far *req);
-extern USHORT gen_ioctl (RP_GENIOCTL _far *ioctl);
-extern USHORT char_dev_input (RP_RWV _far *rwrb);
-extern USHORT exit_drv (int func);
-extern USHORT sr_drv (int func);
-extern void _cdecl _far _loadds add_entry (IORBH _far *iorb);
-extern void trigger_engine (void);
-extern int  trigger_engine_1 (void);
-extern void send_iorb (IORBH _far *iorb);
-extern void iocc_configuration (IORBH _far *iorb);
-extern void iocc_device_control (IORBH _far *iorb);
-extern void iocc_unit_control (IORBH _far *iorb);
-extern void iocm_device_table (IORBH _far *iorb);
-extern void iocc_geometry (IORBH _far *iorb);
-extern void iocc_execute_io (IORBH _far *iorb);
-extern void iocc_unit_status (IORBH _far *iorb);
-extern void iocc_adapter_passthru (IORBH _far *iorb);
-extern void iorb_queue_add (IORB_QUEUE _far *queue, IORBH _far *iorb);
-extern int  iorb_queue_del (IORB_QUEUE _far *queue, IORBH _far *iorb);
-extern void iorb_seterr (IORBH _far *iorb, USHORT error_code);
-extern void iorb_done (IORBH _far *iorb);
-extern void iorb_complete (IORBH _far *iorb);
-extern void iorb_requeue (IORBH _far *iorb);
-extern void aws_free (ADD_WORKSPACE _far *aws);
-extern void lock_adapter (AD_INFO *ai);
-extern void unlock_adapter (AD_INFO *ai);
-extern void _cdecl _far timeout_callback (ULONG timer_handle, ULONG p1, ULONG p2);
-extern void _cdecl _far reset_watchdog (ULONG timer_handle, ULONG p1, ULONG p2);
+extern USHORT init_drv(REQPACKET *req);
+extern USHORT gen_ioctl(REQPACKET *ioctl);
+extern USHORT char_dev_input(REQPACKET *rwrb);
+extern USHORT exit_drv(int func);
+extern USHORT sr_drv(int func);
+extern void add_entry(IORBH FAR16DATA *vIorb);
+extern void trigger_engine(void);
+extern int  trigger_engine_1(void);
+extern void send_iorb(IORBH FAR16DATA *vIorb, IORBH *pIorb);
+extern void iocc_configuration(IORBH FAR16DATA *vIorb, IORBH *pIorb);
+extern void iocc_device_control(IORBH FAR16DATA *vIorb, IORBH *pIorb);
+extern void iocc_unit_control(IORBH FAR16DATA *vIorb, IORBH *pIorb);
+extern void iocm_device_table(IORBH FAR16DATA *vIorb, IORBH *pIorb);
+extern void iocc_geometry(IORBH FAR16DATA *vIorb, IORBH *pIorb);
+extern void iocc_execute_io(IORBH FAR16DATA *vIorb, IORBH *pIorb);
+extern void iocc_unit_status(IORBH FAR16DATA *vIorb, IORBH *pIorb);
+extern void iocc_adapter_passthru(IORBH FAR16DATA *vIorb, IORBH *pIorb);
+extern void iorb_queue_add(IORB_QUEUE *queue, IORBH FAR16DATA *vIorb, IORBH *pIorb);
+extern int  iorb_queue_del(IORB_QUEUE *queue, IORBH FAR16DATA *vIorb);
+extern void iorb_seterr(IORBH *pIorb, USHORT error_code);
+extern void iorb_done(IORBH FAR16DATA *vIorb, IORBH *pIorb);
+extern void iorb_complete(IORBH FAR16DATA *vIorb, IORBH *pIorb);
+extern void iorb_requeue(IORBH *pIorb);
+extern void aws_free(ADD_WORKSPACE *aws);
+extern void lock_adapter(AD_INFO *ai);
+extern void unlock_adapter(AD_INFO *ai);
+extern void __syscall timeout_callback(ULONG timer_handle, ULONG p1);
+extern void __syscall reset_watchdog(ULONG timer_handle, ULONG p1);
 
 /* ahci.c */
-extern int  ahci_save_bios_config (AD_INFO *ai);
-extern int  ahci_restore_bios_config (AD_INFO *ai);
-extern int  ahci_restore_initial_config (AD_INFO *ai);
-extern AHCI_PORT_CFG *ahci_save_port_config (AD_INFO *ai, int p);
-extern void ahci_restore_port_config (AD_INFO *ai, int p,
-                                      AHCI_PORT_CFG *pc);
-extern int  ahci_enable_ahci (AD_INFO *ai);
-extern int  ahci_scan_ports (AD_INFO *ai);
-extern int  ahci_complete_init (AD_INFO *ai);
-extern int  ahci_reset_port (AD_INFO *ai, int p, int ei);
-extern int  ahci_start_port (AD_INFO *ai, int p, int ei);
-extern void ahci_start_fis_rx (AD_INFO *ai, int p);
-extern void ahci_start_engine (AD_INFO *ai, int p);
-extern int  ahci_stop_port (AD_INFO *ai, int p);
-extern int  ahci_stop_fis_rx (AD_INFO *ai, int p);
-extern int  ahci_stop_engine (AD_INFO *ai, int p);
-extern int  ahci_port_busy (AD_INFO *ai, int p);
-extern void ahci_exec_iorb (IORBH _far *iorb, int ncq_capable,
-                            int (*func)(IORBH _far *, int));
-extern void ahci_exec_polled_iorb (IORBH _far *iorb,
-                                   int (*func)(IORBH _far *, int),
-                                   ULONG timeout);
-extern int  ahci_exec_polled_cmd (AD_INFO *ai, int p, int d,
-                                  int timeout, int cmd, ...);
-extern int  ahci_set_dev_idle (AD_INFO *ai, int p, int d, int idle);
-extern int  ahci_flush_cache (AD_INFO *ai, int p, int d);
-
-extern int  ahci_intr (u16 irq);
-extern void ahci_port_intr (AD_INFO *ai, int p);
-extern void ahci_error_intr (AD_INFO *ai, int p, u32 irq_stat);
-
-extern void ahci_get_geometry (IORBH _far *iorb);
-extern void ahci_unit_ready (IORBH _far *iorb);
-extern void ahci_read (IORBH _far *iorb);
-extern void ahci_verify (IORBH _far *iorb);
-extern void ahci_write (IORBH _far *iorb);
-extern void ahci_execute_cdb (IORBH _far *iorb);
-extern void ahci_execute_ata (IORBH _far *iorb);
+extern int  ahci_save_bios_config(AD_INFO *ai);
+extern int  ahci_restore_bios_config(AD_INFO *ai);
+extern int  ahci_restore_initial_config(AD_INFO *ai);
+extern AHCI_PORT_CFG *ahci_save_port_config(AD_INFO *ai, int p);
+extern void ahci_restore_port_config(AD_INFO *ai, int p, AHCI_PORT_CFG *pc);
+extern int  ahci_enable_ahci(AD_INFO *ai);
+extern int  ahci_scan_ports(AD_INFO *ai);
+extern int  ahci_complete_init(AD_INFO *ai);
+extern int  ahci_reset_port(AD_INFO *ai, int p, int ei);
+extern int  ahci_start_port(AD_INFO *ai, int p, int ei);
+extern void ahci_start_fis_rx(AD_INFO *ai, int p);
+extern void ahci_start_engine(AD_INFO *ai, int p);
+extern int  ahci_stop_port(AD_INFO *ai, int p);
+extern int  ahci_stop_fis_rx(AD_INFO *ai, int p);
+extern int  ahci_stop_engine(AD_INFO *ai, int p);
+extern int  ahci_port_busy(AD_INFO *ai, int p);
+extern void ahci_exec_iorb(IORBH FAR16DATA *vIorb, IORBH *pIorb, int ncq_capable, int (*func)(IORBH FAR16DATA *, IORBH *pIorb, int));
+extern void ahci_exec_polled_iorb(IORBH FAR16DATA *vIorb, IORBH *pIorb, int (*func)(IORBH FAR16DATA *, IORBH *pIorb, int), ULONG timeout);
+extern int  ahci_exec_polled_cmd(AD_INFO *ai, int p, int d, int timeout, int cmd, ...);
+extern int  ahci_set_dev_idle(AD_INFO *ai, int p, int d, int idle);
+extern int  ahci_flush_cache(AD_INFO *ai, int p, int d);
+
+extern int  ahci_intr(u16 irq);
+extern void ahci_port_intr(AD_INFO *ai, int p);
+extern void ahci_error_intr(AD_INFO *ai, int p, u32 irq_stat);
+
+extern void ahci_get_geometry(IORBH FAR16DATA *vIorb, IORBH *pIorb);
+extern void ahci_unit_ready(IORBH FAR16DATA *vIorb, IORBH *pIorb);
+extern void ahci_read(IORBH FAR16DATA *vIorb, IORBH *pIorb);
+extern void ahci_verify(IORBH FAR16DATA *vIorb, IORBH *pIorb);
+extern void ahci_write(IORBH FAR16DATA *vIorb, IORBH *pIorb);
+extern void ahci_execute_cdb(IORBH FAR16DATA *vIorb, IORBH *pIorb);
+extern void ahci_execute_ata(IORBH FAR16DATA *vIorb, IORBH *pIorb);
 extern void ahci_dump_host_regs(AD_INFO *ai, int bios_regs);
 extern void ahci_dump_port_regs(AD_INFO *ai, int p);
 extern int  ahci_reset_controller(AD_INFO *ai);
 
-/* libc.c */
-extern void init_libc (void);
-extern void init_com (long BaudRate);
-extern int  vsprintf (char _far *buf, const char *fmt, va_list va);
-extern int  sprintf (char _far *buf, const char *fmt, ...);
-extern void vfprintf (const char *fmt, va_list va);
-extern void _cdecl printf (const char *fmt, ...);
-extern void _cdecl printf_nts (const char *fmt, ...);
-extern void cprintf (const char *fmt, ...);
-extern void phex (const void _far *p, int len, const char *fmt, ...);
-extern size_t strlen (const char _far *s);
-extern char _far *strcpy (char _far *dst, const char _far *src);
-extern int  memcmp (void _far *p1, void _far *p2, size_t len);
-extern void sg_memcpy (SCATGATENTRY _far *sg_list, USHORT sg_cnt,
-                       ULONG sg_off, void _far *buf, USHORT len,
-                       SG_MEMCPY_DIRECTION dir);
-extern long strtol (const char _far *buf,
-                    const char _far * _far *ep, int base);
-extern void *malloc (size_t len);
-extern void free (void *ptr);
-extern ULONG virt_to_phys (void _far *ptr);
-extern void msleep (u32 millies);
-extern void panic (char *msg);
-extern int  disable (void);
-extern void enable (void);
-extern void timer_init (TIMER far *pTimer, u32 Milliseconds);
-extern int  timer_check_and_block (TIMER far *pTimer);
+extern void sg_memcpy(SCATGATENTRY *sg_list, USHORT sg_cnt, ULONG sg_off, void *buf, USHORT len, SG_MEMCPY_DIRECTION dir);
+extern void panic(char *msg);
 
 /* trace.c */
-extern void trace_init (u32);
-extern void trace_exit (void);
-extern void trace_write (u8 _far *s, int len);
-extern u16  trace_read (u8 _far *buf, u16 cb_buf);
-extern u16  trace_char_dev (RP_RWV _far *rwrb);
-extern void build_user_info (int check);
+extern void build_user_info(void);
 
 /* pci.c */
-extern int  add_pci_id (u16 vendor, u16 device);
-extern void scan_pci_bus (void);
-extern int  pci_enable_int (UCHAR bus, UCHAR dev_func);
-extern void pci_hack_virtualbox (void);
-extern char *vendor_from_id (u16 vendor);
-extern char *device_from_id (u16 device);
-UCHAR pci_read_conf (UCHAR bus, UCHAR dev_func, UCHAR indx,
-                     UCHAR size, ULONG _far *val);
-UCHAR pci_write_conf (UCHAR bus, UCHAR dev_func, UCHAR indx, UCHAR size,
-                      ULONG val);
+extern int  add_pci_id(u16 vendor, u16 device);
+extern void scan_pci_bus(void);
+extern int  pci_enable_int(USHORT BusDevFunc);
+extern void pci_hack_virtualbox(void);
+extern char *vendor_from_id(u16 vendor);
+extern char *device_from_id(u16 device);
 
 /* ctxhook.c */
-extern void _cdecl restart_ctxhook(ULONG parm);
-extern void _cdecl reset_ctxhook(ULONG parm);
-extern void _cdecl engine_ctxhook(ULONG parm);
+extern void _Syscall restart_ctxhook(ULONG parm);
+extern void _Syscall reset_ctxhook(ULONG parm);
+extern void _Syscall engine_ctxhook(ULONG parm);
 
 /* apm.c */
 extern void apm_init(void);
 extern void suspend(void);
 extern void resume(void);
 
 /* ioctl.c */
-extern USHORT ioctl_get_devlist (RP_GENIOCTL _far *ioctl);
-extern USHORT ioctl_passthrough (RP_GENIOCTL _far *ioctl);
-extern USHORT ioctl_gen_dsk (RP_GENIOCTL _far *ioctl);
-extern USHORT ioctl_smart (RP_GENIOCTL _far *ioctl);
+extern USHORT ioctl_get_devlist(REQPACKET *ioctl);
+extern USHORT ioctl_passthrough(REQPACKET *ioctl);
+extern USHORT ioctl_gen_dsk(REQPACKET *ioctl);
+extern USHORT ioctl_smart(REQPACKET *ioctl);
 
 /* ---------------------------- global variables --------------------------- */
 
-extern char _cdecl end_of_data;         /* label at the end of all data segments */
-extern void _cdecl _near end_of_code(); /* label at the end of all code segments */
-
-extern int debug;                 /* if != 0, print debug messages to COM1 */
 extern int thorough_scan;         /* if != 0, perform thorough PCI scan */
 extern int init_reset;            /* if != 0, reset ports during init */
 extern int force_write_cache;     /* if != 0, force write cache */
 extern int verbosity;             /* if != 0, show some info during boot */
 extern int use_lvm_info;
-extern int wrap_trace_buffer;
 
 extern HDRIVER rm_drvh;           /* resource manager driver handle */
 extern USHORT add_handle;         /* adapter device driver handle */
-extern UCHAR timer_pool[];        /* timer pool */
 extern char drv_name[];           /* driver name as string ("OS2AHCI") */
 
 extern PCI_ID pci_ids[];          /* SATA adapter PCI IDs */
-extern ULONG drv_lock;            /* driver-level spinlock */
+extern SpinLock_t drv_lock;       /* driver-level spinlock */
 extern ULONG com_lock;            /* debug log spinlock */
-extern volatile PGINFOSEG gis;
 extern IORB_QUEUE driver_queue;   /* driver-level IORB queue */
 extern AD_INFO ad_infos[];        /* adapter information list */
 extern int ad_info_cnt;           /* number of entries in ad_infos[] */
 extern u16 ad_ignore;             /* bitmap with adapters to be ignored */
 extern int init_complete;         /* if != 0, initialization has completed */
 extern int suspended;             /* indicates if the driver is suspended */
 extern int resume_sleep_flag;
-
-extern u16 com_base;              /* debug COM port base address */
 
 /* port restart context hook and input data */
 extern ULONG restart_ctxhook_h;
 extern volatile u32 ports_to_restart[MAX_AD];
 
 /* port reset context hook and input data */
 extern ULONG reset_ctxhook_h;
 extern ULONG th_reset_watchdog;
 extern volatile u32 ports_to_reset[MAX_AD];
 extern IORB_QUEUE abort_queue;
 
 /* trigger engine context hook and input data */
 extern ULONG engine_ctxhook_h;
 
 /* apapter/port-specific options saved when parsing the command line */
 extern u8 emulate_scsi[MAX_AD][AHCI_MAX_PORTS];
 extern u8 enable_ncq[MAX_AD][AHCI_MAX_PORTS];
 extern u8 link_speed[MAX_AD][AHCI_MAX_PORTS];
 extern u8 link_power[MAX_AD][AHCI_MAX_PORTS];
 extern u8 track_size[MAX_AD][AHCI_MAX_PORTS];
 extern u8 port_ignore[MAX_AD][AHCI_MAX_PORTS];
+
+#ifdef DEBUG
+extern void DumpIorb(IORBH *pIorb);
+#endif
trunk/src/os2ahci/pci.c
r176 r178

  * Copyright (c) 2011 thi.guten Software Development
  * Copyright (c) 2011 Mensys B.V.
- * Copyright (c) 2013-2015 David Azarewicz
+ * Copyright (c) 2013-2016 David Azarewicz
  *
  * Authors: Christian Mueller, Markus Thielen
…
 #define PCI_BAR(reg) (UCHAR) (0x10 + (reg) * sizeof(u32))

-/******************************************************************************
- * OEMHLP constants for PCI access
- */
-#define GENERIC_IOCTL    0x10
-#define OH_CATEGORY      0x00
-#define OH_FUNC_PCI      0x0b
-
-/* subfunctions */
-#define OH_BIOS_INFO     0x00
-#define OH_FIND_DEVICE   0x01
-#define OH_FIND_CLASS    0x02
-#define OH_READ_CONFIG   0x03
-#define OH_WRITE_CONFIG  0x04
-
-/* return codes */
-#define OH_SUCCESS       0x00
-#define OH_NOT_SUPPORTED 0x81
-#define OH_BAD_VENDOR    0x83
-#define OH_NOT_FOUND     0x86
-#define OH_BAD_REGISTER  0x87
-
 /* ------------------------ typedefs and structures ------------------------ */

-/******************************************************************************
- * OEMHLP IOCtl parameter union. The parameter area is generally used as input
- * to the OEMHLP IOCtl calls.
- */
-typedef union {
-
-  /* query PCI BIOS information */
-  struct {
-    UCHAR subfunction;
-  } bios_info;
-
-  /* find PCI device */
-  struct {
-    UCHAR subfunction;
-    USHORT device;
-    USHORT vendor;
-    UCHAR index;
-  } find_device;
-
-  /* find PCI class code */
-  struct {
-    UCHAR subfunction;
-    ULONG class;
-    UCHAR index;
-  } find_class;
-
-  /* read PCI configuration space */
-  struct {
-    UCHAR subfunction;
-    UCHAR bus;
-    UCHAR dev_func;
-    UCHAR reg;
-    UCHAR size;
-  } read_config;
-
-  /* write PCI configuration space */
-  struct {
-    UCHAR subfunction;
-    UCHAR bus;
-    UCHAR dev_func;
-    UCHAR reg;
-    UCHAR size;
-    ULONG data;
-  } write_config;
-
-} OH_PARM;
-
-/******************************************************************************
- * OEMHLP IOCtl data union. The data area is generally used as output from the
- * OEMHLP IOCtl calls.
- */
-typedef union {
-
-  /* query PCI BIOS information */
-  struct {
-    UCHAR rc;
-    UCHAR hw_mech;
-    UCHAR major_version;
-    UCHAR minor_version;
-    UCHAR last_bus;
-  } bios_info;
-
-  /* find PCI device */
-  struct {
-    UCHAR rc;
-    UCHAR bus;
-    UCHAR dev_func;
-  } find_device;
-
-  /* find PCI class code */
-  struct {
-    UCHAR rc;
-    UCHAR bus;
-    UCHAR dev_func;
-  } find_class;
-
-  /* read PCI configuration space */
-  struct {
-    UCHAR rc;
-    ULONG data;
-  } read_config;
-
-  /* write PCI configuration space */
-  struct {
-    UCHAR rc;
-  } write_config;
-
-} OH_DATA;
-
 /* -------------------------- function prototypes -------------------------- */

-static void add_pci_device (PCI_ID *pci_id, OH_DATA _far *data);
-static int  oemhlp_call    (UCHAR subfunction, OH_PARM _far *parm,
-                            OH_DATA _far *data);
-static long bar_resource   (UCHAR bus, UCHAR dev_func,
-                            RESOURCESTRUCT _far *resource, int i);
-static char *rmerr         (APIRET ret);
+static void add_pci_device(PCI_ID *pci_id, USHORT BusDevFunc);
+static long bar_resource(USHORT BusDevFunc, RESOURCESTRUCT *resource, int i);
+static char *rmerr(APIRET ret);

 /* ------------------------ global/static variables ------------------------ */
…
  * chipset/controller name strings
  */
 static char chip_esb2[]    = "ESB2";
 static char chip_ich8[]    = "ICH8";
 static char chip_ich8m[]   = "ICH8M";
 static char chip_ich9[]    = "ICH9";
 static char chip_ich9m[]   = "ICH9M";
 static char chip_ich10[]   = "ICH10";
 static char chip_pchahci[] = "PCH AHCI";
 static char chip_pchraid[] = "PCH RAID";
 static char chip_tolapai[] = "Tolapai";
 static char chip_sb600[]   = "SB600";
 static char chip_sb700[]   = "SB700/800";
 static char chip_vt8251[]  = "VT8251";
 static char chip_mcp65[]   = "MCP65";
 static char chip_mcp67[]   = "MCP67";
 static char chip_mcp73[]   = "MCP73";
 static char chip_mcp77[]   = "MCP77";
 static char chip_mcp79[]   = "MCP79";
 static char chip_mcp89[]   = "MCP689";
 static char chip_sis968[]  = "968";

 static char s_generic[]    = "Generic";
…
-PCI_ID pci_ids[] = {
+PCI_ID pci_ids[] =
+{
   /* Intel
    * NOTE: ICH5 controller does NOT support AHCI, so we do
…
 };

-/******************************************************************************
- * OEMHLP$ is used by OS/2 to provide access to OEM-specific machine resources
- * like PCI BIOS access. We're using this to enumerate the PCI bus. Due to
- * BIOS bugs, it may be necessary to use I/O operations for this purpose but
- * so far I think this is only relevant for rather old PCs and SATA is not
- * expected to be a priority on those machines.
- */
-static IDCTABLE oemhlp;  /* OEMHLP$ IDC entry point */
-
 /* ----------------------------- start of code ----------------------------- */
…
   /* search for last used slot in 'pci_ids' */
   for (i = max_slot; i >= 0 && pci_ids[i].vendor == 0; i--);
-  if (i >= max_slot) {
-    /* all slots in use */
-    return(-1);
-  }
+  if (i >= max_slot) return(-1); /* all slots in use */

   /* use slot after the last used slot */
…
 void scan_pci_bus(void)
 {
-  OH_PARM parm;
-  OH_DATA data;
   UCHAR index;
-  UCHAR rc;
   int ad_indx = 0;
   int i;
   int n;
-
-  ddprintf("scanning PCI bus...\n");
-
-  /* verify that we have a PCI system */
-  memset(&parm, 0x00, sizeof(parm));
-  if (oemhlp_call(OH_BIOS_INFO, &parm, &data) != OH_SUCCESS) {
-    cprintf("%s: couldn't get PCI BIOS information\n", drv_name);
-    return;
-  }
+  USHORT BusDevFunc;
+
+  DPRINTF(3,"scanning PCI bus...\n");

   /* Go through the list of PCI IDs and search for each device
…
    * adapters have the correct class code (PCI_CLASS_STORAGE_SATA_AHCI).
    */
-  for (i = 0; pci_ids[i].vendor != 0; i++) {
+  for (i = 0; pci_ids[i].vendor != 0; i++)
+  {
     index = 0;
-    do {
-      if (pci_ids[i].device == PCI_ANY_ID || pci_ids[i].vendor == PCI_ANY_ID) {
+    do
+    {
+      if (pci_ids[i].device == PCI_ANY_ID || pci_ids[i].vendor == PCI_ANY_ID)
+      {
         /* look for class code */
-        memset(&parm, 0x00, sizeof(parm));
-        parm.find_class.class = pci_ids[i].class;
-        parm.find_class.index = index;
-        rc = oemhlp_call(OH_FIND_CLASS, &parm, &data);
-
-      } else if (thorough_scan) {
+        BusDevFunc = PciFindClass(pci_ids[i].class, index);
+      }
+      else if (thorough_scan)
+      {
         /* look for this specific vendor and device ID */
-        memset(&parm, 0x00, sizeof(parm));
-        parm.find_device.device = pci_ids[i].device;
-        parm.find_device.vendor = pci_ids[i].vendor;
-        parm.find_device.index = index;
-        rc = oemhlp_call(OH_FIND_DEVICE, &parm, &data);
-
-      } else {
-        rc = OH_NOT_FOUND;
+        BusDevFunc = PciFindDevice(pci_ids[i].vendor, pci_ids[i].device, index);
       }
-
-      if (rc == OH_SUCCESS) {
+      else
+      {
+        BusDevFunc = 0xffff;
+      }
+
+      if (BusDevFunc != 0xffff)
+      {
         /* found a device */
         int already_found = 0;

         /* increment index for next loop */
-        if (++index > 180) {
-          /* something's wrong here... */
-          return;
-        }
+        if (++index > 180) return; /* something's wrong here... */

         /* check whether we already found this device */
-        for (n = 0; n < ad_info_cnt; n++) {
-          if (ad_infos[n].bus == data.find_device.bus &&
-              ad_infos[n].dev_func == data.find_device.dev_func) {
+        for (n = 0; n < ad_info_cnt; n++)
+        {
+          if (ad_infos[n].bus_dev_func == BusDevFunc)
+          {
             /* this device has already been found (e.g. via thorough scan) */
             already_found = 1;
…
         }

-        if (already_found || (ad_ignore & (1U << ad_indx++))) {
+        if (already_found || (ad_ignore & (1U << ad_indx++)))
+        {
           /* ignore this device; it has either already been found via a
            * thorough scan or has been specified to be ignored via command
…
         }

         /* add this PCI device to ad_infos[] */
-        add_pci_device(pci_ids + i, &data);
+        add_pci_device(pci_ids + i, BusDevFunc);
       }

-    } while (rc == OH_SUCCESS);
+    } while (BusDevFunc != 0xffff);
   }
 }
…
  * bit in the configuration space command register.
  */
-int pci_enable_int(UCHAR bus, UCHAR dev_func)
+int pci_enable_int(USHORT BusDevFunc)
 {
   ULONG tmp;

-  if (pci_read_conf (bus, dev_func, 4, sizeof(u32), &tmp) != OH_SUCCESS ||
-      pci_write_conf(bus, dev_func, 4, sizeof(u32), tmp & ~(1UL << 10)) != OH_SUCCESS) {
+  if (PciReadConfig(BusDevFunc, 4, sizeof(tmp), &tmp) ||
+      PciWriteConfig(BusDevFunc, 4, sizeof(tmp), tmp & ~(1UL << 10)))
+  {
     return(-1);
   }
…
 void pci_hack_virtualbox(void)
 {
-  ULONG irq = 0;
-
-  if (pci_read_conf(0, 0x08, 0x60, 1, &irq) == OH_SUCCESS && irq == 0x80) {
+  UCHAR irq = 0;
+
+  if (!PciReadConfig(0x0008, 0x60, sizeof(irq), &irq) && irq == 0x80)
+  {
     /* set IRQ for first device/func to 11 */
-    dprintf("hacking virtualbox PIIX3 PCI to ISA bridge IRQ mapping\n");
+    DPRINTF(1,"hacking virtualbox PIIX3 PCI to ISA bridge IRQ mapping\n");
     irq = ad_infos[0].irq;
-    pci_write_conf(0, 0x08, 0x60, 1, irq);
+    PciWriteConfig(0x0008, 0x60, sizeof(irq), irq);
   }
 }
…
  * Add a single PCI device to the list of adapters.
  */
-static void add_pci_device(PCI_ID *pci_id, OH_DATA _far *data)
+static void add_pci_device(PCI_ID *pci_id, USHORT BusDevFunc)
 {
   char rc_list_buf[sizeof(AHRESOURCE) + sizeof(HRESOURCE) * 15];
-  AHRESOURCE _far *rc_list = (AHRESOURCE _far *) rc_list_buf;
+  AHRESOURCE *rc_list = (AHRESOURCE *) rc_list_buf;
   RESOURCESTRUCT resource;
   ADAPTERSTRUCT adapter;
…
   AD_INFO *ad_info;
   APIRET ret;
-  UCHAR bus = data->find_class.bus;
-  UCHAR dev_func = data->find_class.dev_func;
   ULONG val;
-  SEL gdt[PORT_DMA_BUF_SEGS + 1];
   char tmp[40];
   u16 device;
…
    * Part 1: Get further information about the device to be added; PCI ID...
    */
-  if (pci_read_conf(bus, dev_func, 0x00, sizeof(ULONG), &val) != OH_SUCCESS) {
-    return;
-  }
-  device = (u16) (val >> 16);
-  vendor = (u16) (val & 0xffff);
+  if (PciReadConfig(BusDevFunc, 0x00, sizeof(ULONG), &val)) return;
+  device = (val >> 16);
+  vendor = (val & 0xffff);

   /* ... and class code */
-  if (pci_read_conf(bus, dev_func, 0x08, sizeof(ULONG), &val) != OH_SUCCESS) {
-    return;
-  }
-  class = (u32) (val >> 8);
-
-  if (pci_id->device == PCI_ANY_ID) {
+  if (PciReadConfig(BusDevFunc, 0x08, sizeof(ULONG), &val)) return;
+  class = (val >> 8);
+
+  if (pci_id->device == PCI_ANY_ID)
+  {
     /* We found this device in a wildcard search. There are two possible
      * reasons which require a different handling:
…
    * based scan (unless overridden with the "/T" option).
    */
-  if (pci_id->vendor != PCI_ANY_ID) {
+  if (pci_id->vendor != PCI_ANY_ID)
+  {
     /* case 1: the vendor is known but we found the PCI device using a class
      * search; verify vendor matches the one in pci_ids[]
      */
-    if (pci_id->vendor != vendor) {
-      /* vendor doesn't match */
-      return;
-    }
-
-  } else {
+    if (pci_id->vendor != vendor) return; /* vendor doesn't match */
+  }
+  else
+  {
     /* case 2: we found this device using a generic class search; if the
      * device/vendor is listed in pci_ids[], use this entry in favor of the
      * one passed in 'pci_id'
      */
-    for (i = 0; pci_ids[i].vendor != 0; i++) {
-      if (pci_ids[i].device == device && pci_ids[i].vendor == vendor) {
+    for (i = 0; pci_ids[i].vendor != 0; i++)
+    {
+      if (pci_ids[i].device == device && pci_ids[i].vendor == vendor)
+      {
         pci_id = pci_ids + i;
         break;
…

   /* found a supported AHCI device */
-  ciiprintf("Found AHCI device %s %s (%d:%d:%d %04x:%04x) class:0x%06lx\n",
-            vendor_from_id(vendor), device_from_id(device),
-            bus, dev_func>>3, dev_func&7, vendor, device, class);
+
+  if (PciReadConfig(BusDevFunc, 0x3c, sizeof(u32), &val)) return;
+  irq = (int) (val & 0xff);
+  pin = (int) ((val >> 8) & 0xff);
+
+  i = 1;
+  if (irq==0 || irq==255) i = 0;
+
+  if (verbosity > i)
+  {
+    iprintf("%s AHCI device %s %s (%d:%d:%d %04x:%04x) class:0x%06x", i?"Found":"Ignoring",
+            vendor_from_id(vendor), device_from_id(device),
+            PCI_BUS_FROM_BDF(BusDevFunc), PCI_DEV_FROM_BDF(BusDevFunc), PCI_FUNC_FROM_BDF(BusDevFunc),
+            vendor, device, class);
+    if (i==0) iprintf("Invalid interrupt (IRQ=%d).", irq);
+  }
+  if (i==0) return;

   /* make sure we got room in the adapter information array */
-  if (ad_info_cnt >= MAX_AD - 1) {
-    cprintf("%s: too many AHCI devices\n", drv_name);
+  if (ad_info_cnt >= MAX_AD - 1)
+  {
+    iprintf("%s: too many AHCI devices", drv_name);
     return;
   }
…
   rc_list->NumResource = 0;

-  /* Register IRQ with resource manager
-   *
-   * NOTE: We rely on the IRQ number saved in the PCI config space by the PCI
-   *       BIOS. There's no reliable way to find out the IRQ number in any
-   *       other way unless we start using message-driven interrupts (which
-   *       is out of scope for the time being).
-   */
-  if (pci_read_conf(bus, dev_func, 0x3c, sizeof(u32), &val) != OH_SUCCESS) {
-    return;
-  }
-  irq = (int) (val & 0xff);
-  pin = (int) ((val >> 8) & 0xff);
-
-  memset(&resource, 0x00, sizeof(resource));
-  resource.ResourceType = RS_TYPE_IRQ;
-  resource.IRQResource.IRQLevel = irq;
-  resource.IRQResource.PCIIrqPin = pin;
-  resource.IRQResource.IRQFlags = RS_IRQ_SHARED;
-
-  ret = RMAllocResource(rm_drvh, &ad_info->rm_irq, &resource);
-  if (ret != RMRC_SUCCESS) {
-    cprintf("%s: couldn't register IRQ %d (rc = %s)\n", drv_name, irq, rmerr(ret));
+  /* Register IRQ with resource manager */
+  ret = RmAddIrq(rm_drvh, &ad_info->rm_irq, irq, pin);
+  if (ret)
+  {
+    if (ret == RMRC_RES_ALREADY_CLAIMED)
+    {
+      ciiprintf("Device already claimed.");
+    }
+    else
+    {
+      iprintf("%s: couldn't register IRQ %d (rc = %s)", drv_name, irq, rmerr(ret));
+    }
     return;
   }
…
    */

-  ddprintf("Adapter %d PCI=%d:%d:%d ID=%04x:%04x\n", ad_info_cnt, bus, dev_func>>3, dev_func&7, vendor, device);
-
-  for (i = 0; i < sizeof(ad_info->rm_bars) / sizeof(*ad_info->rm_bars); i++) {
-    long len = bar_resource(bus, dev_func, &resource, i);
-
-    if (len < 0) {
+  DPRINTF(1,"Adapter %d PCI=%d:%d:%d ID=%04x:%04x\n", ad_info_cnt, PCI_BUS_FROM_BDF(BusDevFunc),
+          PCI_DEV_FROM_BDF(BusDevFunc), PCI_FUNC_FROM_BDF(BusDevFunc), vendor, device);
+
+  for (i = 0; i < sizeof(ad_info->rm_bars) / sizeof(*ad_info->rm_bars); i++)
+  {
+    long len = bar_resource(BusDevFunc, &resource, i);
+
+    if (len < 0)
+    {
       /* something went wrong */
       goto add_pci_fail;
     }
-    if (len == 0) {
+    if (len == 0)
+    {
       /* this BAR is unused */
       continue;
     }

-    if (i == AHCI_PCI_BAR) {
-      if (resource.ResourceType != RS_TYPE_MEM) {
-        cprintf("%s: BAR #5 must be an MMIO region\n", drv_name);
+    if (i == AHCI_PCI_BAR)
+    {
+      if (resource.ResourceType != RS_TYPE_MEM)
+      {
+        iprintf("%s: BAR #5 must be an MMIO region", drv_name);
         goto add_pci_fail;
       }
…
     /* register [MM]IO region with resource manager */
     ret = RMAllocResource(rm_drvh, ad_info->rm_bars + i, &resource);
-    if (ret != RMRC_SUCCESS) {
-      cprintf("%s: couldn't register [MM]IO region (rc = %s)\n",
-              drv_name, rmerr(ret));
+    if (ret != RMRC_SUCCESS)
+    {
+      if (ret == RMRC_RES_ALREADY_CLAIMED)
+      {
+        ciiprintf("Device already claimed.");
+      }
+      else
+      {
+        iprintf("%s: couldn't register [MM]IO region (rc = %s)", drv_name, rmerr(ret));
+      }
       goto add_pci_fail;
     }
…
   }

-  if (ad_info->mmio_phys == 0) {
-    cprintf("%s: couldn't determine MMIO base address\n", drv_name);
+  if (ad_info->mmio_phys == 0)
+  {
+    iprintf("%s: couldn't determine MMIO base address", drv_name);
     goto add_pci_fail;
   }
…
   ad_info->pci_vendor = vendor;
   ad_info->pci_device = device;
-  ad_info->bus = bus;
-  ad_info->dev_func = dev_func;
+  ad_info->bus_dev_func = BusDevFunc;
   ad_info->irq = irq;

-  /* allocate memory for port-specific DMA scratch buffers */
-  if (DevHelp_AllocPhys((long) AHCI_PORT_PRIV_DMA_SZ * AHCI_MAX_PORTS,
-                        MEMTYPE_ABOVE_1M, &ad_info->dma_buf_phys) != 0) {
-    cprintf("%s: couldn't allocate DMA scratch buffers for AHCI ports\n", drv_name);
-    ad_info->dma_buf_phys = 0;
-    goto add_pci_fail;
-  }
-
-  /* allocate GDT selectors for memory-mapped I/O and DMA scratch buffers */
-  if (DevHelp_AllocGDTSelector(gdt, PORT_DMA_BUF_SEGS + 1) != 0) {
-    cprintf("%s: couldn't allocate GDT selectors\n", drv_name);
-    memset(gdt, 0x00, sizeof(gdt));
-    goto add_pci_fail;
-  }
-
-  /* map MMIO address to first GDT selector */
-  if (DevHelp_PhysToGDTSelector(ad_info->mmio_phys,
-                                (USHORT) ad_info->mmio_size, gdt[0]) != 0) {
-    cprintf("%s: couldn't map MMIO address to GDT selector\n", drv_name);
-    goto add_pci_fail;
-  }
-
-  /* map DMA scratch buffers to remaining GDT selectors */
-  for (i = 0; i < PORT_DMA_BUF_SEGS; i++) {
-    ULONG addr = ad_info->dma_buf_phys + i * PORT_DMA_SEG_SIZE;
-    USHORT len = AHCI_PORT_PRIV_DMA_SZ * PORT_DMA_BUFS_PER_SEG;
-
-    if (DevHelp_PhysToGDTSelector(addr, len, gdt[i+1]) != 0) {
-      cprintf("%s: couldn't map DMA scratch buffer to GDT selector\n", drv_name);
-      goto add_pci_fail;
-    }
-  }
-
-  /* fill in MMIO and DMA scratch buffer addresses in adapter info */
-  ad_info->mmio = (u8 _far *) ((u32) gdt[0] << 16);
-  for (i = 0; i < PORT_DMA_BUF_SEGS; i++) {
-    ad_info->dma_buf[i] = (u8 _far *) ((u32) gdt[i+1] << 16);
+  ad_info->mmio = MapPhysToLin(ad_info->mmio_phys, ad_info->mmio_size);
+  if (!ad_info->mmio) goto add_pci_fail;
+
+  /* fill in DMA scratch buffer addresses in adapter info */
+  for (i = 0; i < AHCI_MAX_PORTS; i++)
+  {
+    ad_info->dma_buf[i] = MemAlloc(AHCI_PORT_PRIV_DMA_SZ);
+    ad_info->dma_buf_phys[i] = MemPhysAdr(ad_info->dma_buf[i]);
   }

   /* register adapter with resource manager */
   memset(&adj, 0x00, sizeof(adj));
   adj.pNextAdj       = NULL;
   adj.AdjLength      = sizeof(adj);
   adj.AdjType        = ADJ_ADAPTER_NUMBER;
   adj.Adapter_Number = ad_info_cnt;

   memset(&adapter, 0x00, sizeof(adapter));
…

   ret = RMCreateAdapter(rm_drvh, &ad_info->rm_adh, &adapter, NULL, rc_list);
-  if (ret != RMRC_SUCCESS) {
-    cprintf("%s: couldn't register adapter (rc = %s)\n", drv_name, rmerr(ret));
+  if (ret != RMRC_SUCCESS)
+  {
+    iprintf("%s: couldn't register adapter (rc = %s)", drv_name, rmerr(ret));
     goto add_pci_fail;
   }
…
   return;

 add_pci_fail:
   /* something went wrong; try to clean up as far as possible */
-  for (i = 0; i < sizeof(ad_info->rm_bars) / sizeof(*ad_info->rm_bars); i++) {
-    if (ad_info->rm_bars[i] != 0) {
-      RMDeallocResource(rm_drvh, ad_info->rm_bars[i]);
-    }
-  }
-  if (ad_info->rm_irq != 0) {
-    RMDeallocResource(rm_drvh, ad_info->rm_irq);
-  }
-  for (i = 0; i < sizeof(gdt) / sizeof(*gdt); i++) {
-    if (gdt[i] != 0) {
-      DevHelp_FreeGDTSelector(gdt[i]);
-    }
-  }
-  if (ad_info->dma_buf_phys != 0) {
-    DevHelp_FreePhys(ad_info->dma_buf_phys);
-  }
-}
-
-/******************************************************************************
- * Read PCI configuration space register
- */
-UCHAR pci_read_conf(UCHAR bus, UCHAR dev_func, UCHAR indx, UCHAR size,
-                    ULONG _far *val)
-{
-  OH_PARM parm;
-  OH_DATA data;
-  UCHAR rc;
-
-  memset(&parm, 0x00, sizeof(parm));
-  parm.read_config.bus      = bus;
-  parm.read_config.dev_func = dev_func;
-  parm.read_config.reg      = indx;
-  parm.read_config.size     = size;
-  if ((rc = oemhlp_call(OH_READ_CONFIG, &parm, &data) != OH_SUCCESS)) {
-    cprintf("%s: couldn't read config space (bus = %d, dev_func = 0x%02x, indx = 0x%02x, rc = %d)\n",
-            drv_name, bus, dev_func, indx, rc);
-    return(rc);
-  }
-
-  *val = data.read_config.data;
-  return(OH_SUCCESS);
-}
-
-/******************************************************************************
- * Write PCI configuration space register
- */
-UCHAR pci_write_conf(UCHAR bus, UCHAR dev_func, UCHAR indx, UCHAR size,
-                     ULONG val)
-{
-  OH_PARM parm;
-  OH_DATA data;
-  UCHAR rc;
-
-  memset(&parm, 0x00, sizeof(parm));
-  parm.write_config.bus      = bus;
-  parm.write_config.dev_func = dev_func;
-  parm.write_config.reg      = indx;
-  parm.write_config.size     = size;
-  parm.write_config.data     = val;
-
-  if ((rc = oemhlp_call(OH_WRITE_CONFIG, &parm, &data) != OH_SUCCESS)) {
-    cprintf("%s: couldn't write config space (bus = %d, dev_func = 0x%02x, indx = 0x%02x, rc = %d)\n",
-            drv_name, bus, dev_func, indx, rc);
-    return(rc);
-  }
-
-  return(OH_SUCCESS);
-}
-
-/******************************************************************************
- * Call OEMHLP$ IDC entry point with the specified IOCtl parameter and data
- * packets.
- */
-static int oemhlp_call(UCHAR subfunction, OH_PARM _far *parm,
-                       OH_DATA _far *data)
-{
-  void (_far *func)(void);
-  RP_GENIOCTL ioctl;
-  unsigned short prot_idc_ds;
-
-  if (oemhlp.ProtIDCEntry == NULL || oemhlp.ProtIDC_DS == 0) {
-    /* attach to OEMHLP$ device driver */
-    if (DevHelp_AttachDD("OEMHLP$ ", (NPBYTE) &oemhlp) ||
-        oemhlp.ProtIDCEntry == NULL ||
-        oemhlp.ProtIDC_DS == 0) {
-      cprintf("%s: couldn't attach to OEMHLP$\n", drv_name);
-      return(OH_NOT_SUPPORTED);
-    }
-  }
-
-  /* store subfunction in first byte of parameter packet */
-  parm->bios_info.subfunction = subfunction;
-  memset(data, 0x00, sizeof(*data));
-
-  /* assemble IOCtl request */
-  memset(&ioctl, 0x00, sizeof(ioctl));
-  ioctl.rph.Len    = sizeof(ioctl);
-  ioctl.rph.Unit   = 0;
-  ioctl.rph.Cmd    = GENERIC_IOCTL;
-  ioctl.rph.Status = 0;
-
-  ioctl.Category   = OH_CATEGORY;
-  ioctl.Function   = OH_FUNC_PCI;
-  ioctl.ParmPacket = (PUCHAR) parm;
-  ioctl.DataPacket = (PUCHAR) data;
-  ioctl.ParmLen    = sizeof(*parm);
-  ioctl.DataLen    = sizeof(*data);
-
-  /* Call OEMHLP's IDC routine. Before doing so, we need to assign the address
-   * to be called to a stack variable because the inter-device driver calling
-   * convention forces us to set DS to the device driver's data segment and ES
-   * to the segment of the request packet.
-   */
-  func = oemhlp.ProtIDCEntry;
-
-  /* The WATCOM compiler does not support struct references in inline
-   * assembler code, so we pass it in a stack variable
-   */
-  prot_idc_ds = oemhlp.ProtIDC_DS;
-
-  _asm {
-    push ds;
-    push es;
-    push bx;
-    push si;
-    push di;
-
-    push ss
-    pop  es
-    lea  bx, ioctl;
-    mov  ds, prot_idc_ds;
-    call dword ptr [func];
-
-    pop di;
-    pop si;
-    pop bx;
-    pop es;
-    pop ds;
-  }
-
-  //dddphex(parm, sizeof(*parm), "oemhlp_parm: ");
-  //dddphex(data, sizeof(*data), "oemhlp_data: ");
-
-  if (ioctl.rph.Status & STERR) {
-    return(OH_NOT_SUPPORTED);
-  }
-  return(data->bios_info.rc);
+  for (i = 0; i < sizeof(ad_info->rm_bars) / sizeof(*ad_info->rm_bars); i++)
+  {
+    if (ad_info->rm_bars[i] != 0) RMDeallocResource(rm_drvh, ad_info->rm_bars[i]);
+  }
+  if (ad_info->rm_irq != 0) RMDeallocResource(rm_drvh, ad_info->rm_irq);
 }
…
  * I = I/O (1) or memory (0)
  */
-static long bar_resource(UCHAR bus, UCHAR dev_func,
-                         RESOURCESTRUCT _far *resource, int i)
+static long bar_resource(USHORT BusDevFunc, RESOURCESTRUCT *resource, int i)
 {
   u32 bar_addr = 0;
…

   /* temporarily write 1s to this BAR to determine the address range */
-  if (pci_read_conf (bus, dev_func, PCI_BAR(i), sizeof(u32), &bar_addr) != OH_SUCCESS ||
-      pci_write_conf(bus, dev_func, PCI_BAR(i), sizeof(u32), ~(0UL)) != OH_SUCCESS ||
-      pci_read_conf (bus, dev_func, PCI_BAR(i), sizeof(u32), &bar_size) != OH_SUCCESS ||
-      pci_write_conf(bus, dev_func, PCI_BAR(i), sizeof(u32), bar_addr) != OH_SUCCESS) {
-
-    cprintf("%s: couldn't determine [MM]IO size\n", drv_name);
-    if (bar_addr != 0) {
-      pci_write_conf(bus, dev_func, PCI_BAR(i), sizeof(u32), bar_addr);
+  if (PciReadConfig (BusDevFunc, PCI_BAR(i), sizeof(u32), &bar_addr) ||
+      PciWriteConfig(BusDevFunc, PCI_BAR(i), sizeof(u32), ~(0UL)) ||
+      PciReadConfig (BusDevFunc, PCI_BAR(i), sizeof(u32), &bar_size) ||
+      PciWriteConfig(BusDevFunc, PCI_BAR(i), sizeof(u32), bar_addr) )
+  {
+    iprintf("%s: couldn't determine [MM]IO size", drv_name);
+    if (bar_addr != 0)
+    {
+      PciWriteConfig(BusDevFunc, PCI_BAR(i), sizeof(u32), bar_addr);
     }
     return(-1);
   }

-  if (bar_size == 0 || bar_size == 0xffffffffUL) {
-    /* bar not implemented or device not working properly */
-    return(0);
-  }
+  /* bar not implemented or device not working properly */
+  if (bar_size == 0 || bar_size == 0xffffffffUL) return(0);

   /* prepare resource allocation structure */
   memset(resource, 0x00, sizeof(*resource));
-  if (bar_addr & 1) {
+  if (bar_addr & 1)
+  {
     bar_size = ~(bar_size & 0xfffffffcUL) + 1;
     bar_size &= 0xffffUL;  /* I/O address space is 16 bits on x86 */
…
     resource->IOResource.IOAddressLines = 16;

-  } else {
+  }
+  else
+  {
     bar_size = ~(bar_size & 0xfffffff0UL) + 1;
     bar_addr &= 0xfffffff0UL;
…
   }

-  ddprintf("BAR #%d: type = %s, addr = 0x%08lx, size = %ld\n", i,
+  DPRINTF(3,"BAR #%d: type = %s, addr = 0x%08lx, size = %d\n", i,
           (resource->ResourceType == RS_TYPE_IO) ? "I/O" : "MEM",
           bar_addr, bar_size);
…
 {

-  switch(id) {
-
+  switch(id)
+  {
   case PCI_VENDOR_ID_AL:
     return "Ali";
…

   return "Generic";
-
 }
…
   int i;

-  for (i = 0; pci_ids[i].vendor != 0; i++) {
-
-    if (pci_ids[i].device == device) {
+  for (i = 0; pci_ids[i].vendor != 0; i++)
+  {
+    if (pci_ids[i].device == device)
+    {
       return pci_ids[i].chipname;
     }
-
   }
-
trunk/src/os2ahci/trace.c
r176 r178

  * Copyright (c) 2011 thi.guten Software Development
  * Copyright (c) 2011 Mensys B.V.
- * Copyright (c) 2013-2015 David Azarewicz
+ * Copyright (c) 2013-2016 David Azarewicz
  *
  * Authors: Christian Mueller, Markus Thielen
…
 /* ------------------------ global/static variables ------------------------ */

-struct {
-  u32 phys_addr;  /* physical address of allocated buffer */
-  u8 _far *tbuf;  /* mapped address of trace buffer */
-  u16 writep;     /* current write offset in buffer */
-  u16 readp;      /* current read offset in buffer */
-  u16 mask;       /* The mask for wrapping the buffer pointers */
-} ahci_trace_buf;
-
 /* ----------------------------- start of code ----------------------------- */
-
-/******************************************************************************
- * initialize AHCI circular trace buffer
- *
- * NOTE: this func must be called during INIT time since it allocates
- *       a GDT selector for the trace ring buffer
- */
-void trace_init(u32 ulBufSize)
-{
-  SEL sel = 0;
-
-  if (ahci_trace_buf.phys_addr) return;
-
-  /* initialize ring buffer logic */
-  ahci_trace_buf.writep = 0;
-  ahci_trace_buf.readp = 0;
-  ahci_trace_buf.mask = ulBufSize - 1;
-
-  if (ahci_trace_buf.phys_addr == 0) {
-    /* allocate buffer */
-    if (DevHelp_AllocPhys(ulBufSize, MEMTYPE_ABOVE_1M,
-                          &(ahci_trace_buf.phys_addr))) {
-      /* failed above 1MB, try below */
-      if (DevHelp_AllocPhys(ulBufSize, MEMTYPE_BELOW_1M,
-                            &(ahci_trace_buf.phys_addr))) {
-        /* failed, too. Give up */
-        ahci_trace_buf.phys_addr = 0;
-        cprintf("%s warning: failed to allocate %dk trace buffer\n",
-                drv_name, ulBufSize / 1024);
-        return;
-      }
-    }
-
-    /* allocate GDT selector and map our physical trace buffer to it */
-    if (DevHelp_AllocGDTSelector(&sel, 1) ||
-        DevHelp_PhysToGDTSelector(ahci_trace_buf.phys_addr,
-                                  ulBufSize, sel)) {
-      /* failed; free GDT selector and physical memory we allocated before */
-      if (sel) {
-        DevHelp_FreeGDTSelector(sel);
-        sel = 0;
-      }
-      DevHelp_FreePhys(ahci_trace_buf.phys_addr);
-      ahci_trace_buf.phys_addr = 0;
-      return;
-    }
-
-    /* create ring buffer address */
-    ahci_trace_buf.tbuf = (u8 _far *) ((u32) sel << 16);
-  }
-}
-
-/******************************************************************************
- * cleanup trace buffer
- *
- * NOTE: this function is here for completeness; the trace buffer should not
- *       be deallocated and then reallocated.
- */
-void trace_exit(void)
-{
-  /* free physical address */
-  if (ahci_trace_buf.phys_addr) {
-    DevHelp_FreePhys(ahci_trace_buf.phys_addr);
-    ahci_trace_buf.phys_addr = 0;
-  }
-
-  /* free GDT selector */
-  if (ahci_trace_buf.tbuf) {
-    DevHelp_FreeGDTSelector((SEL) ((u32) (ahci_trace_buf.tbuf) >> 16));
-    ahci_trace_buf.tbuf = NULL;
-  }
-}
-
-/******************************************************************************
- * write a string to the circular trace buffer
- *
- * Note: This func wraps the buffer if necessary, so the caller does not
- *       need to call repeatedly until everything is written.
- */
-void trace_write(u8 _far *s, int len)
-{
-  //NOT USED USHORT awake_cnt;
-
-  if (ahci_trace_buf.phys_addr == 0) {
-    /* tracing not active */
-    return;
-  }
-
-  while (len) {
-    if (!wrap_trace_buffer &&
-        (((ahci_trace_buf.writep+1) & ahci_trace_buf.mask) == ahci_trace_buf.readp)) {
-      break; /* buffer is full */
-    }
-
-    ahci_trace_buf.tbuf[ahci_trace_buf.writep] = *s++;
-    ahci_trace_buf.writep++;
-    ahci_trace_buf.writep &= ahci_trace_buf.mask;
-
-    /* keep the latest full buffer of information */
-    if (ahci_trace_buf.writep == ahci_trace_buf.readp)
-      ahci_trace_buf.readp = (ahci_trace_buf.readp+1) & ahci_trace_buf.mask;
-
-    len--;
-  }
-
-  /* wake up processes waiting for data from trace buffer */
-  //NOT_USED DevHelp_ProcRun(ahci_trace_buf.phys_addr, &awake_cnt);
-}
-
-/******************************************************************************
- * read data from circular trace buffer
- * returns the number of bytes written to the caller's buffer
- *
- * NOTE: the caller is expected to call this func repeatedly
- *       (up to two times) until it returns 0
- */
-u16 trace_read(u8 _far *buf, u16 cb_buf)
-{
-  u16 cb_read;
-
-  if (ahci_trace_buf.phys_addr == NULL) return 0;
-
-  for (cb_read = 0; cb_read < cb_buf && (ahci_trace_buf.readp != ahci_trace_buf.writep); cb_read++)
-  {
-    *buf++ = ahci_trace_buf.tbuf[ahci_trace_buf.readp];
-    ahci_trace_buf.readp++;
-    ahci_trace_buf.readp &= ahci_trace_buf.mask;
-  }
-
-  return cb_read;
-}
-
-/******************************************************************************
- * copy trace buffer content to character device reader (request block buffer)
- */
-u16 trace_char_dev(RP_RWV _far *rwrb)
-{
-  u8 _far *to_buf;
-  u16 cb_read = 0;
-  u16 cb;
-  USHORT mode = 0;
-
-  spin_lock(com_lock);
-
-  /* get pointer to caller's buffer */
-  if (DevHelp_PhysToVirt(rwrb->XferAddr, rwrb->NumSectors, &to_buf, &mode)) {
-    spin_unlock(com_lock);
-    return (STATUS_DONE | STERR);
-  }
-
-  /* loop until caller's buffer is full or no more data in trace buffer */
-  do {
-    cb = trace_read(to_buf + cb_read, rwrb->NumSectors - cb_read);
-    cb_read += cb;
-  } while (cb > 0 && cb_read < rwrb->NumSectors);
-
-  spin_unlock(com_lock);
-  rwrb->NumSectors = cb_read;
-
-  return(STDON);
-}

 /******************************************************************************
  * Create adapter/port/device list for user output.
  */
-void build_user_info(int check)
+void build_user_info(void)
 {
   int a;
…
   int d;

-  if (check && (ahci_trace_buf.readp != ahci_trace_buf.writep)) return;
-
-  for (a = 0; a < ad_info_cnt; a++) {
+  for (a = 0; a < ad_info_cnt; a++)
+  {
     AD_INFO *ai = ad_infos + a;

-    ntprintf("Adapter %d: PCI=%d:%d:%d ID=%04x:%04x %s %s irq=%d addr=0x%lx version=%lx\n", a,
-             ai->bus, ai->dev_func>>3, ai->dev_func&7,
+    NTPRINTF("Adapter %d: PCI=%d:%d:%d ID=%04x:%04x %s %s irq=%d addr=0x%x version=%x\n", a,
+             PCI_BUS_FROM_BDF(ai->bus_dev_func), PCI_DEV_FROM_BDF(ai->bus_dev_func),
+             PCI_FUNC_FROM_BDF(ai->bus_dev_func),
              ai->pci_vendor, ai->pci_device, vendor_from_id(ai->pci_vendor), ai->pci->chipname,
              ai->irq, ai->mmio_phys,
              ai->bios_config[HOST_VERSION / sizeof(u32)]);

-    for (p = 0; p < ai->hw_ports; p++) {
+    for (p = 0; p < ai->hw_ports; p++)
+    {
       P_INFO *pi = &ai->ports[p];

-      ntprintf("  Port %d:\n", p);
+      NTPRINTF("  Port %d:\n", p);

-      for (d = 0; d <= pi->dev_max; d++) {
-        if (!pi->devs[d].present) {
-          ntprintf("    No drive present\n");
+      for (d = 0; d <= pi->dev_max; d++)
+      {
+        if (!pi->devs[d].present)
+        {
+          NTPRINTF("    No drive present\n");
         } else {
-          ntprintf("    Drive %d:", d);
-          if (pi->devs[d].atapi) ntprintf(" atapi");
-          if (pi->devs[d].removable) ntprintf(" removable");
-          if (pi->devs[d].dev_info.Method != NULL) {
-            ntprintf(" %ld cylinders, %d heads, %d sectors per track (%ldMB) (%s)",
-                     (u32)pi->devs[d].dev_info.Cylinders, pi->devs[d].dev_info.HeadsPerCylinder, pi->devs[d].dev_info.SectorsPerTrack,
+          NTPRINTF("    Drive %d:", d);
+          if (pi->devs[d].atapi) NTPRINTF(" atapi");
+          if (pi->devs[d].removable) NTPRINTF(" removable");
+          if (pi->devs[d].dev_info.Method != NULL)
+          {
+            NTPRINTF(" %d cylinders, %d heads, %d sectors per track (%dMB) (%s)",
+                     pi->devs[d].dev_info.Cylinders,
pi->devs[d].dev_info.HeadsPerCylinder, pi->devs[d].dev_info.SectorsPerTrack, 244 79 pi->devs[d].dev_info.TotalSectors/2048, pi->devs[d].dev_info.Method); 245 } else ntprintf(" Drive present but no information available. Not queried by OS.");246 ntprintf("\n");80 } else NTPRINTF(" Drive present but no information available."); 81 NTPRINTF("\n"); 247 82 } /* if */ 248 83 } /* for d */ … … 251 86 } 252 87 88 #ifdef DEBUG 89 void DumpIorb(IORBH *pIorb) 90 { 91 DPRINTF(2,"IORB %x: Size=%x Len=%x Handle=%x CmdCode=%x\n", 92 pIorb, sizeof(IORBH), pIorb->Length, pIorb->UnitHandle, pIorb->CommandCode); 93 DPRINTF(2," CmdMod=%x ReqCtrl=%x Status=%x ErrorCode=%x\n", 94 pIorb->CommandModifier, pIorb->RequestControl, pIorb->Status, pIorb->ErrorCode); 95 DPRINTF(2," Timeout=%x StatusBlkLen=%x pStatusBlk=%x Res=%x pNxtIORB=%x\n", 96 pIorb->Timeout, pIorb->StatusBlockLen, pIorb->pStatusBlock, pIorb->Reserved_1, 97 pIorb->pNxtIORB); 98 } 99 #endif 100