Asynchronous IO API #9111
base: development
Conversation
std/haxe/Callback.hx
Outdated
//Callback is expected to be one-time and this cleanup is expected to help
//to spot multiple calls
var fn = this;
this = null;
Does this line do anything? Surely other references/aliases to this function don't get set to null.
Oh, right. That's a pity. Any ideas how to guarantee that?
I don't think we can without wrapping the function in a closure or otherwise introducing state.
Removed that for now.
What about wrapping the function in an additional closure, but only in `-debug` mode?
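A sketch of that idea, assuming the `CallbackHandler` typedef from this PR (the `guard` name and error message are illustrative, not actual PR code):

```haxe
// Hypothetical sketch: guard a one-shot callback against double invocation,
// paying the extra closure allocation only in -debug builds.
static function guard<T>(fn:CallbackHandler<T>):CallbackHandler<T> {
	#if debug
	var called = false;
	return function(error, result) {
		if (called)
			throw 'Callback invoked more than once';
		called = true;
		fn(error, result);
	}
	#else
	return fn;
	#end
}
```

Release builds would keep the current zero-allocation behaviour, while debug builds would surface a double call as an exception at the second call site.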
	This is an attempt to design a cross-platform API for big byte buffers (more than 2GB)
	without any unnecessary allocations.
**/
class BigBuffer {
I guess this is not used in the APIs? Also endianness should be configurable.
Yes, an intermediary `Bytes` instance has to be used to pass bytes between APIs and `BigBuffer`.

Added `endian` property.
std/asyncio/net/Socket.hx
Outdated
import haxe.Callback;
import haxe.errors.NotImplemented;

class Socket implements IDuplex {
@nadako made a good point – TCP and IPC sockets should be separate, let this be a base class for `TcpSocket` and `IpcSocket`.
I'm not sure about that anymore. They share the API and can be distinguished by connection addresses. What would be the benefit of separating them like that?
You could probably simplify the constructors and/or things like `connect`.
The docs are a bit wonky in places, but this can be cleaned up later.
std/haxe/Callback.hx
Outdated
@@ -0,0 +1,62 @@
package haxe;

typedef CallbackHandler<T> = (error:Null<Error>, result:T) -> Void;
Technically, `result` can also be null. As mentioned before, it would be nice if null safety could somehow deal with that.
A nullable result would generate unnecessary allocations for basic types.
As for null safety, yes, I want to make it aware of `Callback<T>` semantics. That's why the `Callback` API does not allow providing both error and result at the same time.
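To illustrate the invariant being described (a minimal sketch, not necessarily the PR's actual code): the abstract only exposes one method per outcome, so a call site can never supply an error and a result together.

```haxe
// Illustrative sketch of the Callback<T> invariant: success and fail are
// the only entry points, so error and result are mutually exclusive.
abstract Callback<T>(CallbackHandler<T>) from CallbackHandler<T> {
	public inline function success(result:T):Void
		this(null, result);

	public inline function fail(error:Error):Void
		this(error, cast null); // the only place a null result is produced
}
```

Null safety could then treat `result` as non-null whenever `error` is null, since no other call path exists.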
std/asyncio/IReadable.hx
Outdated
/**
	An interface to read bytes from a source of bytes.
**/
Perhaps something like a (strictly advisory) `prepare(size:Int)` method would make sense?
What do you mean? `from Int to Int`?
@RealyUniqueName I assume that comment was a reply to something else?
Perhaps something like a (strictly advisory) prepare(size:Int) method would make sense?
I don't know. Sounds implementation-specific.
std/haxe/Error.hx
Outdated
	Error description.
**/
public function toString():String {
	return '$message at ${pos()}';
How about using Haxe's normal format (i.e. `${pos()}: $message`)? It will work out of the box in VSCode \o/
I'm not sure it's a good approach for run-time exceptions.
That VSCode argument only works for console apps or exceptions printed to the console.
In general you want more visibility for the error message rather than the error position.
AFAIR all languages/runtimes provide the exception message first and then position/stack info.
I don't like it much. If you need a package specific to low-level classes, it should be something like
std/haxe/Callback.hx
Outdated
/**
	The underlying function type is declared in `haxe.CallbackHandler`.
**/
abstract Callback<T>(CallbackHandler<T>) from CallbackHandler<T> {
That seems quite specific to package implementation, I'm not sure I would make it part of haxe package. Also, you might want a notImplemented callback instead of doing .fail(new NotImplemented()) for every function missing implementation ;)
I expect it to be useful for hxnodejs and maybe other things once we implement coroutines.

@ncannasse I'll move everything to
It's amazing to see Haxe implementing an async API!! I just wonder, what would be the technical implications of not using an
Haxe will be getting both asys (an API similar to this PR) and coroutines in the future. Coroutines are a big undertaking because they have to be integrated properly into the language. It is not just a case of adding

And, once we do have asys (with a callback-style API) and coroutines, the two should be easy to join. It may be that our coroutines will be designed in a way that the callback-style API is directly compatible with an asynchronous call.
This goes to 4.2, according to our plans :)
Sorry for the intervention, but I'm worried you're all missing some important points. This API does not seem to account for multithreading environments in any way. It's a straight copy of Node.js, which does not have that problem by design. Unfortunately, most of the "sys" platforms support threading. Therefore I have some questions:
Just to mention that Deno 1.0 has been released.
The idea is that callbacks get executed in the same thread they were created in. That's supposed to be the common denominator of all of our (very different) targets.
Yes, I thought about that and saw that Apple had unfortunately moved away from the SChannel-style, transport-layer-independent design their old api had (although I've seen one or two posts which imply the network framer api might be able to do something similar). I would be inclined to say that haxe apple users survived this long with mbedtls so could survive a bit longer if it means windows and linux users get a more flexible TLS api.
Another thing I forgot to add was around tests. I've added tests for sockets now but the current implementation of those tests seems a bit dodgy. I spin up a new thread to set up a listen server using the existing blocking api, however I have a lock which is released as the last bit of code for that thread, which the main test thread waits for before calling

I had a quick look around the haxe repo and there appears to be no tests for the existing socket API so I've got nothing to go on from there. libuv tests its sockets using its own api, i.e. it sets up a libuv server for the libuv socket to connect to. This is probably a more reliable way of doing things but still feels wrong, testing against ourselves and not the existing synchronous api. Any thoughts on best approaches to testing this sort of stuff are much appreciated (this probably extends to issues testing IPC, testing current process signals, current process stdio, etc, etc...).
The main reason why I'd like to get rid of mbedtls, at least for Windows and macOS, is because it is a pain to deal with from a packaging perspective. Vendoring it doesn't work because then it's out of date real quick (hxcpp's version was 5 years out of date before I updated it, which reminds me that it probably needs to be updated again). While a flexible TLS api could be nice (though I'm not sure how useful it would actually be in practice), I would very much prefer not to lock ourselves in to having to use mbedtls or another third-party library on macOS. (And the framer idea doesn't work, that's for implementing a message based protocol on top of tcp/tls)
Libuv should work fine on iOS. watchOS blocks BSD sockets though, so libuv's networking bits won't work there. On that platform you need to use high level apis like URLSession or (for specific usecases) Network.framework (https://developer.apple.com/documentation/technotes/tn3135-low-level-networking-on-watchos)

I was thinking a bit about the reading apis. On Windows the "standard" for async IO is to use completion ports. The api there ends up pretty much the same as the current asys api: you hand a buffer to the api and you get notified when it's been filled. Java's AsynchronousSocketChannel has a similar API. On Linux with io_uring you can also do that, but for better memory efficiency it's recommended to use buffer pools, where the kernel will select a buffer from a buffer group by itself. Apple's Network.framework is also completion-based, but instead of taking a buffer to write into, it hands you back a

Python asyncio apis also return buffers instead of taking them. So the question is: might an api that looks like

This would allow implementations to take advantage of eg IO uring buffer pools or use apis like Network.framework or python asyncio without incurring extra copies.
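The alternative shape being discussed might look something like this (purely a sketch; `readBuffer` is a made-up name, and the `read` signature is the current asys style):

```haxe
// Hypothetical comparison of the two read styles discussed above.
interface IReadable {
	// Current asys style: the caller owns the buffer and the callback
	// reports how many bytes were written into it.
	function read(buffer:haxe.io.Bytes, offset:Int, length:Int, callback:Callback<Int>):Void;

	// Completion style: the implementation picks the buffer (e.g. from an
	// io_uring buffer pool) and hands it back, avoiding an extra copy.
	function readBuffer(maxBytes:Int, callback:Callback<haxe.io.Bytes>):Void;
}
```

Under the second style a libuv-backed implementation could still allocate per call, while an io_uring or Network.framework backend could return pooled buffers directly.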
I'm very much aware of how crap vendoring is as a dependency mechanism, but to date there has been no mention of Mac specific TLS engines in any of the main repos' issue boards so it doesn't seem like a massive issue to anyone. Changing the read function seems like increasing api complexity for the sake of something that won't happen. My understanding is that targets are going to be using libuv to do the heavy lifting; even if someone were to provide at least one alternative async io implementation for a target / os it probably makes sense to not accept it. Manpower is forever a problem and the advantage of all unifying around something like libuv means implementation complexity is lowered and knowledge gained from working on one target's implementation can be easily transferred to another target.
I'm bothered and when I get my hands on a Mac I would very much like to make an asys implementation backed by native Apple apis. And using the native apis has more advantages than just TLS. You also get built-in Happy Eyeballs support, compatibility with VPN On Demand, better handling of connectivity issues, and better performance thanks to an improved underlying network stack. Also, do you have a concrete example where TLS on top of not-TCP would be useful? I think this usecase is better addressed by external libraries, possibly in combination with mbedtls being exposed on platforms where the standard library uses it.
I could argue that adding a flexible TLS abstraction seems like increasing api complexity for the sake of some use case that might not even exist :)
Java and Python are not going to use libuv (I hope). For those two we should most definitely rely on their built-in async io capabilities, pulling in a native dependency would create needless complications.
libuv's far from perfect and we should not assume that we'll be using it forever. That said, we could leave the api as is for now and add a second reading api at some point in the future that offers a fast path for eg IO uring, and apps that care about performance could then select which api to use.
Yep, it could certainly be seen as extra complexity. But considering pretty much all targets will be using TLS implementations which are independent of the transport layer (mbedtls, node, jvm), it won't make any difference to the implementations if the haxe side is locked to a socket or not.

Access After Close

Are we defining behaviour for what should happen if a user attempts to call a function / read or write a property of a class after

uv_stream_write byte count

The

More read and write functions

Previously mentioned that I think there should be more read and write convenience functions to match the existing

class IWritableExtensions {
public static function writeInt8(writable:IWritable, v:Int, callback:Callback<Int>) {
final output = new BytesOutput();
output.writeByte(v);
writable.write(output.getBytes(), 0, output.length, callback);
}
public static function writeInt16(writable:IWritable, v:Int, callback:Callback<Int>) {
final output = new BytesOutput();
output.writeInt16(v);
writable.write(output.getBytes(), 0, output.length, callback);
}
}

With this static extension class approach I'm trying to solve two problems: allow targets to shadow this class and provide a more optimised implementation if they've got one, and have a single fallback implementation which doesn't need to be copied for each target which isn't shadowing it. This is an attempt to not have a repeat of the ifdef nightmare in Bytes.hx.

Endian-ness

The

Asys HTTP!

Everyone seems to love whinging about
Compromise: initially implement regular SecureSockets, and put off the discussion on whether to expose a more flexible TLS api to another time.
Returning a consistent error seems reasonable (though this should probably be passed via the callback instead of throwing).
It's probably correct; with a true async api, writing fewer bytes than provided can only happen if the stream is closed, which should give an EOF error.
It may be better to implement some kind of BufferedReader on top of IReadable, to avoid reading/writing a lot of very small chunks. Same for writing. As for files not implementing IReadable and co, the solution there is probably to expose a separate file stream class?
Haven't really got an opinion here.
People are doing things about it, just not in the standard library.
Imo HTTP should be a separate library, which could be pulled into the standard library at some point in the future, though I'm not sure whether we'd even want to have a full HTTP stack in the standard library, given that that would require HTTP/2 and 3 (and thus also QUIC) support.
Passing a closed exception through the callback was my intention, but I probably worded it poorly in my initial comment. My intention with the readable and writable extensions was that implementations could provide those functions to avoid needing to allocate haxe bytes and copy data into them. E.g. hxcpp
Note that most if not all implementations of IWritable will not do any buffering by themselves. So I still stand by the idea of putting these utility functions on separate BufReader/BufWriter classes.
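A minimal sketch of what such a class could look like, assuming the `IWritable` and `Callback` types from this PR (the `BufWriter` class itself is hypothetical):

```haxe
import haxe.io.BytesBuffer;

// Hypothetical BufWriter: small writes accumulate in memory and only hit
// the underlying IWritable on flush, avoiding many tiny write calls.
class BufWriter {
	final out:IWritable;
	var buffer:BytesBuffer = new BytesBuffer();

	public function new(out:IWritable) {
		this.out = out;
	}

	public function writeByte(v:Int):Void
		buffer.addByte(v);

	public function writeInt16(v:Int):Void {
		// little-endian for illustration; a real class would honour
		// a configurable endianness as discussed earlier in this thread
		buffer.addByte(v & 0xFF);
		buffer.addByte((v >> 8) & 0xFF);
	}

	public function flush(callback:Callback<Int>):Void {
		final bytes = buffer.getBytes();
		buffer = new BytesBuffer();
		out.write(bytes, 0, bytes.length, callback);
	}
}
```

Targets could still shadow this class with an optimised version, keeping the single-fallback idea from the static extension proposal.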
Object Lifetimes

Going to take a quick detour to bring up this topic I've had brewing in the back of my mind for a few months at this point and keep putting off. Take the following innocent-looking code snippet where the user opens a file but doesn't explicitly close it: when should the libuv handle and any associated non-GC resources be released?
The naive approach might be to free stuff in whatever finalisation method the target provides, but there are some annoying details which prevent this. Below is a quick summary of libuv handle management.
It's that third point which makes the finalisation technique a non-starter, as finalisation traditionally makes no guarantees about which thread it will be called from. Furthermore, depending on what handle you're closing, the callbacks of queued work may be called with an error response, which could lead to haxe code being executed from the finaliser, which is usually not allowed by targets which provide finalisation. The annoying thing is if

There are maybe some things we could do to mitigate this. In the finaliser of objects which hold libuv handles, add a reference to that handle to some thread safe structure of orphaned handles and have a libuv task which periodically checks that list and cleans up the handles and resources found within. But these things start to get quite complex quite quickly. It was the realisation that calling

Hopefully I've just been thinking about this for too long and I can no longer see the forest for the trees; maybe there's something simple we can do to clean up objects eagerly in the case when close isn't called.
Libuv handles should have a reference to the loop they were created with (https://docs.libuv.org/en/v1.x/handle.html#c.uv_handle_t.loop), so in the finalizer you could just queue a cleanup callback on that loop, right?
Some thoughts about the API design that I don't think have been raised yet:
You could queue up the handle for cleaning from the finalisers but you'd need to mostly DIY it and then use something like uv_async_t (the only thread safe libuv api) to signal the loop to check that list and perform the cleanup. The haxe objects would also need to know if the handle they hold has been automatically cleaned by the loop teardown, for situations where the haxe object is finalised after the thread it was created on has exited. This also seems like the sort of solution which is very easy to get wrong and have all sorts of nasty race conditions hiding in wait, which is why I'm a bit hesitant to attempt it unless there's no other option.

I've nearly finished the tests for the

Don't have much to say about the link stuff apart from there are some OS specific test failures in that area which I need to get around to checking, to see if that is due to a dodgy implementation on my part or OS specific behaviour.
Another thing I've been thinking about these last few days which hasn't come up yet is cancellation. There are many asynchronous operations which may never complete for one reason or another (reading from sockets / stdio / anything which could produce data, waiting for process exits, etc, etc) and there are many situations where you may want to attempt some operation up until a certain point, at which point it isn't relevant / wanted any more (e.g. having a timeout on reading user input from stdin). You could probably emulate some of these situations by closing the object whenever your chosen cancellation criteria has been met, but the entire object is then unusable, as opposed to just an operation of your choosing currently being performed or waiting to be performed getting stopped. Part of the answer to this might also be in the coroutine design: in kotlin you can request the cancellation of a coroutine through a function on the object representing a launched coroutine. So if haxe coroutines end up having something similar that might be one approach for this.
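One possible shape for operation-level cancellation, loosely modelled on cancellation tokens in other ecosystems (entirely hypothetical, not part of this PR):

```haxe
// Hypothetical cancellation token: cancels a single pending operation
// without closing and invalidating the whole object.
class CancellationToken {
	public var cancelled(default, null):Bool = false;

	public function new() {}

	public function cancel():Void
		cancelled = true;
}

// An operation could then accept an optional token and complete its
// callback with a Cancelled error instead of data if the token fires
// before the IO completes, e.g. (made-up overload):
//   reader.read(buffer, 0, length, token, callback);
```

This would keep the object usable after a timeout, and a coroutine layer could map its own cancellation onto such tokens later.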
I've gone through and created issues in my repo for all the things I've mentioned, along with giving them labels and updating my comments in this merge to point to them. I will continue to post everything here as well, I just wanted something easier to track, especially as half of my comments are tucked away and I have to keep clicking "show more comments" to find them.

The

I went back and added a bunch of server and socket tests once I figured out a good approach. What I've done is created a bunch of very small haxe "script" files for hosting a server or running a socket using the existing synchronous API. I spawn a haxe process running one of these scripts on the interpreter for the test to interact with. This should mean it's portable across all other targets as well. Before I start on my system package notes there are a few more network related ones which came about from writing these tests.

Tcp Socket Buffer Size is a Suggestion

On Windows if I set the send and receive buffers of a TCP socket to something like 7000 and then fetch the value it comes out as 7000, but on linux fetching the value returns 16384. I'm guessing these buffer sizes are more of a suggestion to the OS and it's free to choose a different value. So, should our API recognise this? Might it be better to have these functions return the value which was set, similar to how write functions return how many bytes were actually sent vs what you offered?

Send and Receive Buffer Sizes Unsupported on Windows Pipes

Setting or getting the size of pipe buffers on windows is not supported by libuv.

Closing Server with Pending Accepts

If the user has called

System Package

Process PID not a property

The pid is stored as a field, not a property on the process. This isn't a problem unless we want to go ahead and say that after closing, accessing any functions or properties will result in an error, which we can't do with a field.
Null stdio streams if not piped

If the user specifies that the stdio streams should be, say, ignored, are the stdin, stdout, and stderr properties expected to return null values or throw some sort of exception? I'm guessing throw since they're not null annotated. But the base process class provides all streams as an array with the first items being the stdio streams, so should they be null here?

Multiple exitCode calls

If the user makes multiple

Closing spawned process stdio pipes

If the user pipes the stdio do they need to call

Process cwd option

This option takes a string when it could probably take the new FilePath abstract instead.

Process User and group ID on Windows

These libuv options are not supported on Windows but are exposed in the process options structure; this probably just adds more fuel to the fire that the current permissions design doesn't really work. But if we do want to keep it, what should happen if they're supplied on windows? Silently ignore them? Return some sort of not supported error?

Non Existing CWD Causes Confusing Error Message

Much like the user in that issue I started questioning my sanity when I started to receive "file not found" errors when writing tests against a program which definitely existed. Turns out it was a bug in my

Process Exit Code Not 64bit

libuv exit codes are represented as uint64_ts but they get truncated to ints on the haxe side.

ProcessOptions user and group uses Ints not SystemUser and SystemGroup abstract

You can set the uid and gid on a spawned process but these options are integers, not the

Should Spawned Processes Keep the Event Loop Alive

If the user spawns a long running process I think it makes sense that the event loop will continue to run (and therefore that thread not exit) until the process has exited. However I don't think this makes sense for the detached case, as the entire point of that option is that the process continues to run after the parent closes.
Unclear Process Environment Variable Behaviour

Are we copying libuv's env behaviour? That is, if the user provides environment variables it does not inherit the environment of the parent process.

function test() {
	final env = "FOO";
	final value = "BAR";
	Sys.putEnv(env, value); // hypothetical: parent makes FOO=BAR visible in its own environment
	Process.open("program.exe", { env: [ "BAR" => "BAZ" ] }, (proc, error) -> {});
}

In the above example the spawned process has access to

Closing Stdio Pipes

Closing stdio pipes isn't something you really want to do and has inconsistent behaviour on libuv. So, what should we do if the user calls

StdioConfig File Pipes

The

We could add extra config options for these other cases but at that point it might be better to provide a more generic pipe mechanism, as currently the user can't, say, automatically send a file through a socket. This would also greatly reduce the complexity of the process implementation. Alternatively, don't provide any fancy pipe stuff and let the user do it themselves.

Process.execute Stdio

This function is designed to be a shortcut for calling a function but it accepting the same process options as the full

Process.execute("haxe.exe", { stdio: [ PipeWrite, Ignore, Ignore ] }, (error, result) -> {
trace(result.stdout, result.stderr);
});
Two more general points now.

More
Here's an "executive summary" of the hxcpp asys implementation and test suite as it is, since I've now implemented most of the API and have tests in most areas. My hxcpp asys branch is here (https://github.com/Aidan63/hxcpp/tree/asys) and the haxe side is here (https://github.com/Aidan63/hxcpp_asys). The haxe side only requires a 4.3.x build and shadows the

~95% of the api is implemented. It's possible I've missed some things but these are the things I've explicitly left unimplemented.
I have made some changes from that API in this merge which should be mostly uncontroversial.
Test suite uses utest2 and should be easy to run on targets other than hxcpp. There is one set of filesystem tests which peek at the underlying

IPC has zero tests right now as I'm not sure what to test against (no synchronous equivalent). Secure sockets also have no tests; I need to figure out what to do with certificates and stuff (it would be nice to have tests which don't all disable certificate verification and go out to some external url). Socket keep alive stuff is untested, it's hard to do this without making the test suite take an age to run if we can't specify the keep alive time. Many more error case tests could be added to the process api but I've left these for now, as I've mentioned quite a few things around the process api so will wait until those are cleared up. Everything related to permissions and user / group ids is untested. There are some areas (sockets and process io pipes) where I wanted to add more complex tests around reading and writing lots of data, but the callback style drains my will to test for these more complex ones. Migrating this suite over to coroutines once available will massively increase the maintainability of it all.

Does utest have any parametric test functionality? There are several places where I want to run the same test against different input and have basically had to do it manually with a loop. But this makes it a pain to see which test cases are actually failing.

As a final test metric, this is what I've got so far: 1822 assertions across 415 tests.
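As far as I know utest has no built-in parametric tests; a common workaround is to label each iteration so failures identify the input. A sketch, with a made-up `roundTrip` helper standing in for the code under test:

```haxe
// Data-driven test sketch: the message argument tells failing cases apart
// when the same assertion runs in a loop.
function testSocketBufferSizes() {
	final cases = [0, 1, 7000, 16384];
	for (size in cases) {
		final actual = roundTrip(size); // hypothetical helper under test
		utest.Assert.equals(size, actual, 'failed for buffer size $size');
	}
}
```

Not as good as real parametric tests (one failing case still aborts nothing and all cases share one test entry), but it at least makes the failing input visible in the report.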
There are still some things to do in the hxcpp implementation (SecureSocket for non-Windows, general cleanup, investigating some of the legitimate failing tests I've got), but I think the next thing to do is probably start making decisions on some of the points mentioned.
Regarding the UID/GID stuff, I would just remove that from the API for the time being. (See https://github.com/Apprentice-Alchemist/haxe/tree/feature/refactor_std/std/sys/fs for my take on a (synchronous) filesystem api) For now we should focus on exposing portable functionality; platform specific stuff is a problem for later.
I'd agree with removing the UID and GID stuff, but I don't think keeping read-only is much better either, as there's no easy way to make it consistent (and useful) across platforms. Take Rust's readonly permission: on Windows it just checks the readonly file attribute, which is a holdover from the OS/2 days and not an actual access control mechanism; because of this it can report that a file is writable but trying to write can then fail if there is a deny ACL for the given user. We could document all of that, but I'd imagine most users will have very little knowledge of the underlying permission systems, and even if they did read it it would be in one ear and out the other, leading to code which naively uses some readonly property / function to check if they can write to a file and ignores any errors which might come about from actually trying to write. The best thing to do if you want to see if you can actually write to a file is to try and open it in a write mode; we should probably be encouraging that instead of providing questionable shortcuts.
Not exposing the readonly bit is fine by me.
@9Morello didn't realise you reacted to my response in the coro pr, but I'll move it here since it's probably a better place. I don't think something like libxev would make things easier; the docs are very sparse and make no mention of its threading model. It seems heavily inspired by libuv so I'm guessing it's the same, which is a big pain point of libuv. Not to mention "Schrodinger's Windows support" (the readme mentions both that Windows is supported and that Windows support is planned)! If something else were to be chosen I'd want it to not be a C library. Working with libuv has been a good reminder as to why I steer clear of these big C libraries. Macros to "fake" inheritance, passing around data as void pointers, having to deal with function pointers, etc, etc. What an absolute pain!
What's the relationship between asys and the thread event loop? I was debugging an issue from how I've integrated the two but then realised I've made the assumption that they're related. E.g. is the following expected to work?

Thread.create(() -> {
asys.native.filesystem.FileSystem.openFile('foo.txt', Read, (_, _) -> {
trace('complete');
});
});

In my implementation this throws an exception, as it piggybacks off the thread event loop, which

This sort of event loop integration was briefly mentioned in the coroutines PR, so there might be a lot of overlap here.

On the topic of the process api I'm proposing the following changes / simplification.
So overall the class looks like enum abstract StdioConfig(Int) {
var Redirect;
var Inherit;
var Ignore;
}
typedef ProcessOptions = {
var ?args:Array<String>;
var ?cwd:String;
var ?env:Map<String, String>;
var ?stdin:StdioConfig;
var ?stdout:StdioConfig;
var ?stderr:StdioConfig;
var ?user:Int;
var ?group:Int;
var ?detached:Bool;
}
class Process {
public var pid(get, never):Int;
public var stdin(get, never):IWritable;
public var stdout(get, never):IReadable;
	public var stderr(get, never):IReadable;
static public function execute(command:String, ?options:ProcessOptions, callback:Callback<{?stdout:Bytes, ?stderr:Bytes, exitCode:Int}>);
static public function open(command:String, ?options:ProcessOptions, callback:Callback<ChildProcess>);
public function sendSignal(signal:Signal, callback:Callback<NoData>);
}

This does still leave questions about the
Three other things I noted down while doing some cleanup.

**isFile and null**

Realised that my

**IOExceptions**

The IOException type is rather awkward to use, especially if you look at it from a coroutine point of view. If, for example, I'm only interested in the file not being found and I'm fine with other exceptions being bubbled up, I'd have to write something like this.

```haxe
try {
    FileSystem.info(path).modificationTime;
    // do other stuff
} catch (exn:IOException) {
    if (exn.type.match(FileNotFound)) {
        return false;
    }
    // rethrow exceptions we're not handling...
    throw exn;
}
```

Usually I'm all for reducing inheritance, but with the built-in type checking of catches, having exception subclasses for each exception type makes for nicer code and is less likely to accidentally swallow errors.

```haxe
try {
    FileSystem.info(path).modificationTime;
    // do other stuff
} catch (_:FileNotFoundException) {
    return false;
}
```

**Calling close on alive processes**

One I found while going over some libuv docs, you shouldn't call
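A minimal sketch of what that subclass hierarchy could look like. `FileNotFoundException` and its constructor shape are hypothetical here, just to show how typed catches would select on the subclass:

```haxe
import haxe.Exception;

// Hypothetical base class for IO errors, mirroring the existing IOException idea.
class IOException extends Exception {}

// One subclass per error kind, so `catch (_:FileNotFoundException)` can
// select on the static type instead of matching on an error enum.
class FileNotFoundException extends IOException {
    public final path:String;

    public function new(path:String) {
        super('File not found: $path');
        this.path = path;
    }
}
```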
Had an idea the other day which might solve several questions around threading, lifetimes, and some coroutine stuff. I've been operating on a 1 haxe thread, 1 libuv loop principle. But what if there was a dedicated background thread which just ran the libuv loop and onto which all asys API calls were serialised? I think this approach solves several problems.

**Cross thread usage.** You no longer have the limitation of asys objects only being usable on the thread they were created on. Passing asys objects around threads might seem odd, but given that there has been a fair bit of discussion about coroutine scheduling, it might be possible for coroutines to resume on a threadpool, which means asys objects being usable across threads is important, and this approach would allow that.

**Resource management.** A single libuv thread owned by the runtime makes object lifetimes much easier. If close hasn't been called by the time the asys object gets finalised, it's easy to schedule the close on the dedicated libuv thread. You no longer have to worry about tracking haxe threads, which objects were created on them, is that thread still alive, etc, etc.

**Not tied to the thread event loop.** This would also free asys from the thread event loop and would allow a much easier blocking

There are potentially some downsides though.

This approach would probably work fine for hxcpp and hl, where they have their own runtime, but if another target which doesn't really implement its own runtime wants to use libuv for its implementation, that might make things a bit more difficult.

This single thread could become a bottleneck, as work will have to be placed in some sort of thread-safe collection which can be picked up by the libuv thread to process. This might not end up being a problem under normal conditions though.
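A minimal sketch of the serialisation idea, assuming a hypothetical runtime-owned loop thread; the `uv_async`-style wakeup and the actual libuv calls are elided:

```haxe
import sys.thread.Thread;
import sys.thread.Deque;

// Hypothetical dispatcher: a single background thread owns the libuv loop
// and drains a thread-safe queue of work items posted from any haxe thread.
class LoopThread {
    static final queue = new Deque<() -> Void>();
    static final thread = Thread.create(run);

    // Any thread may call this; the closure runs on the loop thread.
    public static function post(work:() -> Void) {
        queue.add(work);
        // A real implementation would also wake the loop, e.g. via uv_async_send.
    }

    static function run() {
        while (true) {
            final work = queue.pop(true); // block until work arrives
            work();
            // A real implementation would interleave uv_run(..., UV_RUN_NOWAIT) here.
        }
    }
}
```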
This is probably more coroutine related than asys, but it's a follow-on from the above comment so I'm putting it here. Played around with the global libuv loop idea and it seems to work well; converted a very small subset of my asys stuff to use it (file open, write, and close). A coroutine implementation of `openFile`:

```haxe
@:coroutine public static function openFile<T>(path:FilePath, flag:FileOpenFlag<T>):T {
    if (path == null) {
        throw new ArgumentException('path', 'path was null');
    }

    return Coroutine.suspend(cont -> {
        cpp.asys.File.open(
            path,
            cast flag,
            file -> cont.resume(@:privateAccess new File(file), null),
            error -> cont.resume(null, new FsException(error, path)));
    });
}
```

This makes implementing a blocking coroutine `start` straightforward:

```haxe
// possible `start` implementation.
final loop = new EventLoop();
final blocker = new WaitingCompletion(loop, new EventLoopScheduler(loop));
final result = switch myCoroutine(blocker) {
    case Suspended:
        // wait will pump the provided event loop until its `resume` is called,
        // indicating the coroutine has completed.
        blocker.wait();
    case Success(v):
        v;
    case Error(exn):
        throw exn;
}
```

An enhancement / slight alternative to this global loop which might help libuv integration with other targets could be to have some sort of

I've created a new branch in both my hxcpp fork and hxcpp_asys repo with this small global loop test.
I've got quite a bit about coroutines in Haxe over here if it's any help: stx_coroutine
Lower-level async API. Take 1.
I tried to stay close to the C API.
Please review thoroughly, especially the socket-related API, because I don't have much experience with sockets.
Also, your thoughts on TODOs are very appreciated.
While I tried to define the behavior in doc blocks I expect things to change during the implementation journey for the sake of cross-platform consistency.
- filesystem
- net: TODO
- processes: TODO
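As a small illustration of the callback style in the filesystem part of the proposal (the `openFile` signature follows the snippet used earlier in this thread, and the `(error, result)` callback shape is an assumption; exact behaviour may still change during implementation):

```haxe
import asys.native.filesystem.FileSystem;

class Main {
    static function main() {
        // Callbacks receive (error, result); on failure the error is non-null.
        FileSystem.openFile('foo.txt', Read, (error, file) -> {
            if (error != null) {
                trace('failed to open: $error');
                return;
            }
            trace('opened');
        });
    }
}
```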