
Introducing LND 0.10

Speakers: Olaoluwa Osuntokun, Joost Jager, Oliver Gugger

Date: April 18, 2020

Transcript By: Michael Folkson

Tags: Lnd

Media: https://www.youtube.com/watch?v=h34fUGuDjMg

Intro (Laolu Osuntokun)

Thanks everyone for coming. This is my first time doing a VR presentation, really cool. I am going to be talking about lnd 0.10, the next major lnd release. Right now we are on rc2. It has a bunch of cool features, bug fixes and optimizations. I am going to talk about some big ticket features in 0.10. There are a bunch of other features that you are going to want to check out in the release notes. We will have a blog post talking about the release.

New Channel Type: Anchor Outputs

First I am going to talk about a new channel type we call anchor outputs. This work was primarily done by Joost and also Johan on my team. One of the things I want to do first is talk about the motivation. There are some issues with the way fees work on the Lightning Network right now. Right now the initiator pays the fees for the lifetime of the channel. The initiator is going to pay the onchain fees for the funding transaction itself. They are also going to pay the fees for the entire duration of the channel. Obviously this is simple but it has some other issues. Whenever you do a cooperative close you are able to negotiate the fee and go back and forth. It is not really defined in the specification but you can have some sort of fee negotiation algorithm to ensure you are able to eventually close things out. When you force close unilaterally you are locked into the fee of that closing transaction ahead of time. This is a little bit different. Typically whenever you broadcast something you can estimate the fee at the time of broadcast and see exactly what is going on with the mempool and whatever else. In this case you are forced to guess ahead of time exactly what the fee will be in order to get the transaction in a block in time. This can be an issue. Let’s say you were offline for a period of time and fees rose significantly, you wouldn’t be able to update the fee on that closing transaction itself. Typically whenever you’re doing a force close you are doing so in order to redeem some HTLCs. The HTLCs are a multi-hop thing. If the incoming one times out before you can claim the outgoing one, you could lose the funds of the HTLC itself, so it is a big issue. So we created this new commitment type called anchor outputs. In Lightning this is probably the third commitment type we have in terms of upgrading it. We have the OG one, we have one where you set up a static remote key and now we have this one we are calling anchor outputs.
You need someone that has this new feature bit and then you can use this new channel type. In a nutshell we now have these two new outputs on the commitment transaction. Typically your transaction will have two outputs. I have my coins going to me and you have your coins going to you. We both have an anchor output funded by the initiator. It is not really that expensive. The initiator now only puts 660 satoshis extra into the channel at that given time. We call them anchor outputs because they let you use CPFP or child-pays-for-parent to anchor down the transaction. Johan is credited with the name in a mailing list post three or four years ago. In essence you can now decide on fees whenever you broadcast. You no longer have to get the fee rate correct to get into the next block. You just need to get it into the mempool itself. Once in the mempool I can use CPFP to adjust my fee rate in time and ideally everything will be ok. It is not like a regular script. It is a pay-to-witness-script-hash script. It has a new CSV clause. The reason we have this is because any individual can spend those outputs after a delay. People will probably be writing scrapers to check the blocks, check for this magic value and the output size, and sweep the coins themselves. The protocol is producing these outputs over time which can be swept by other individuals or by yourself depending on whether you think the fee rate is proper or not. One other quirk that we have here is that every single spend path has a CSV of 1. This is due to some edge cases discovered in RBF (BIP 125). It is possible for an individual to impede the confirmation of a transaction, forcing you to replace a very long, heavily weighted transaction branch. We are super happy we got this deployed. Right now it is still pending. We figured that we’d get it into lnd with this new version.
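As a back-of-the-envelope illustration of why anchoring helps (the numbers here are hypothetical, not from the talk), the fee a child transaction spending the anchor must pay to lift a stuck commitment to a target package fee rate can be computed like this:

```python
def cpfp_child_fee(parent_fee, parent_vsize, child_vsize, target_rate):
    """Fee in sats the anchor-spending child must pay so that the
    combined parent+child package reaches target_rate sat/vbyte."""
    package_vsize = parent_vsize + child_vsize
    needed = target_rate * package_vsize - parent_fee
    return max(needed, 0)

# A commitment tx paying 500 sats over 200 vbytes (2.5 sat/vB) with a
# 150 vbyte child: to reach 10 sat/vB the child must contribute the rest.
print(cpfp_child_fee(500, 200, 150, 10))
```

The point of the new channel type is exactly that this decision can be deferred to broadcast time, instead of being fixed when the commitment was last signed.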

In terms of the new things in lnd that we did to make this work properly, these are things that can be used for other purposes as well. We have something in lnd called the Sweeper. It supersedes something that we had called the UTXO Nursery. It has the same purpose. It is a generalized sweeping engine in lnd. Whenever you are doing Lightning you have your commitment output, you have your HTLC outputs, you have other things as well. We can give these to the Sweeper and it will do its job. We added a few different things. One thing we call a force sweep. Typically if the output itself is 330 satoshis it maybe wouldn’t be economical. The Sweeper would normally only sweep if economical. We need the ability to force it to sweep an output even though it is uneconomical. There are also exclusive groups. The way it is now is that at any given point there are going to be three different commitment transactions that are valid. There is ours, there is the one we sent the other party that hasn’t been revoked yet and there is the pending one. We have to anchor down all three at a given point in time because we don’t know which one is going to be confirmed. We need to make sure that all of these inputs are not in the same transaction so that we don’t have a double spend. Now we have a gRPC command that makes that easier. We have WalletKit.BumpFee which is something you can use to do CPFP across any transaction that lnd controls or even an external transaction. Now we have a wrapper called lncli wallet bumpclosefee which lets you bump the fee on a pending force close transaction. You can also do it for a cooperative close as well. It gives you some additional stuff there. It is an opt-in feature right now. You enable it with --protocol.anchors. If I have the anchor bit set and you have the anchor bit set we are going to use this new feature going forward. This is something that other implementations are starting to implement themselves now too.
It will become more of an official thing. Right now it is only a lnd thing but we don’t imagine it will change too much. We are also working on some other features to let you update your commitment type dynamically without closing your channel out.
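A rough sketch of the exclusive-group idea described above (the names and data shapes are illustrative, not lnd's actual internals): inputs that would double-spend each other, such as the anchor sweeps for the three possible commitment transactions, share a group ID, and the batcher never puts two inputs from the same group into one transaction.

```python
def batch_sweep_inputs(inputs):
    """Partition sweep inputs into transactions so that no two inputs
    sharing an exclusive group (mutually double-spending inputs)
    end up in the same transaction."""
    txs = []
    for inp in inputs:
        for tx in txs:
            taken = {i["group"] for i in tx if i["group"] is not None}
            if inp["group"] is None or inp["group"] not in taken:
                tx.append(inp)
                break
        else:
            txs.append([inp])  # no compatible tx yet, start a new one
    return txs

# Three anchor sweeps for the three valid commitments share group 1;
# an ordinary HTLC sweep has no exclusivity constraint.
inputs = [
    {"name": "anchor_local", "group": 1},
    {"name": "anchor_remote", "group": 1},
    {"name": "anchor_remote_pending", "group": 1},
    {"name": "htlc_sweep", "group": None},
]
txs = batch_sweep_inputs(inputs)
print([len(t) for t in txs])
```

Whichever commitment confirms, only one of the three anchor-sweep transactions can ever be valid, and the other two simply become conflicting spends that get discarded.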

Tor Enhancements

As you know lnd has pretty good support for Tor right now. One thing it has is automatic Tor v3 hidden services. From the command line you can have it make a new hidden service for you. One thing we did was further abstract the way we store the private key for the hidden service and allow you to store it elsewhere. Let’s say you want to have 3 lnds that all have the same hidden service. You want to have that private key stored elsewhere, maybe in some distributed storage. Another thing we did was have more complex integrations with Tor in lnd itself. We support HASHEDPASSWORD authentication which lets you authenticate to the Tor daemon without using the auth cookie. In order to use the cookie you need to be able to read the same filesystem as the Tor daemon itself. This gives you a hashed password similar to what some other daemons use. One other cool thing we have is Tor support for watchtowers. Similar to how you do it for lnd we have --tor.active and --tor.v3, that is your regular hidden service for Tor. We also now have --watchtower.active which allows you to create a new hidden service for your watchtower. Importantly the new hidden service is isolated and you have a distinct onion address for this new Tor service. You can have a Tor tower node which you can give to your friends. You can use the lncli tower info command to see what the new address is for your tower node.
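Putting the flags mentioned above together, a minimal config fragment might look like the following. This is a sketch only: verify the exact option names and sections against `lnd --help` for your version, and the password option here is an assumption about how the HASHEDPASSWORD auth is exposed.

```ini
[watchtower]
; Expose the watchtower server; it gets its own isolated onion service
watchtower.active=true

[tor]
tor.active=true
tor.v3=true
; Authenticate to the Tor control port with a password instead of the
; auth cookie (useful when lnd cannot read Tor's cookie file)
tor.password=...
```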

New HTLC Event Subscriptions

Right now we have something called ForwardingHistory. It gives you information about all the past forwards in your node which makes it easy to compute a histogram of the payments that you forwarded. You can compute your fee revenue and all that stuff. One thing it lacks is information on accepted HTLC forwards and failed forwards. One thing you realize is that information about failed forwards can be very useful for optimizing your own node. Failures tell you when you need to reallocate liquidity on your node. Now we have this new SubscribeHtlcEvents call. It gives you a ForwardEvent which is a HTLC being extended. This is cool because you can time how long it takes for a HTLC to get fully extended to give a picture of the latency in the network. With a ForwardFailEvent you’ve got a HTLC, you sent it downstream and it got cancelled back. We have the SettleEvent which is the normal case where the HTLC gets settled. We have the LinkFailEvent which is where we got a HTLC in and we tried to forward it to another channel but that channel rejected it. Maybe the reason was an insufficient fee or an incorrect update policy. It should eventually be integrated into our project called lndmon which is kind of like a combination of Prometheus and Grafana. Ideally you want a graph of your HTLC activity over time. Maybe you see a burst of activity at 2pm every day. This tool allows node operators to hone in and see exactly what they need to be doing as far as rebalancing, opening and closing channels. We want to make sure node operators have as much information as possible to make intelligent decisions about their channels.
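As a sketch of how a consumer of this stream might compute the forward latencies mentioned above (the event dicts here are simplified stand-ins, not the actual gRPC message types), you can pair each ForwardEvent with its terminal event by HTLC key:

```python
def forward_latencies(events):
    """Pair each 'forward' event with its terminal event ('settle',
    'forward_fail' or 'link_fail') by HTLC key, yielding latencies."""
    pending = {}   # HTLC key -> timestamp the forward was first seen
    results = []
    for ev in events:
        if ev["type"] == "forward":
            pending[ev["key"]] = ev["ts"]
        elif ev["type"] in ("settle", "forward_fail", "link_fail"):
            if ev["key"] in pending:
                latency = ev["ts"] - pending.pop(ev["key"])
                results.append((ev["key"], ev["type"], latency))
    return results

events = [
    {"type": "forward", "key": "chan1:7", "ts": 0.0},
    {"type": "forward", "key": "chan2:3", "ts": 0.5},
    {"type": "settle", "key": "chan1:7", "ts": 1.2},
    {"type": "forward_fail", "key": "chan2:3", "ts": 2.5},
]
print(forward_latencies(events))
```

Bucketing those tuples by hour of day is then enough to surface the kind of 2pm activity burst the talk describes.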

RPC Enhancements

We have a bunch of RPC enhancements. These are the main ones I selected as notable. One thing we have is a new version endpoint. This is something people have been asking for for a while. In lnd we upgrade the RPC system over time and it can be difficult to tell what capabilities a node offers. Now we have a new version endpoint that you can hit. It will show the Go version it was compiled with which is useful for reproducible builds. Now we have fully deterministic, fully reproducible builds. On my Mac I can build a binary for any other machine that we support, which is probably too many. We have PowerPC and stuff like that on there. We also have the lnd commit hash and the build_tags. The build tags are pretty important because they are related to sub-servers which are a newer form of servers. These will now be less experimental in this new version. They will be in our main documentation. This is cool because anytime you have a wallet application it can hit this new version endpoint and see lnd 0.10 added those features, I can use those now. It is something that people will realize is neat as we made breaking changes back in the day. We make breaking changes less frequently now but we still figure it is pretty useful. Another thing we have is in ListPeers. In the Lightning Network you can send an error to a peer. Maybe the error is critical or non-critical. Maybe the error is I don’t like your HTLC. Up until now that error was always buried in the logs so you’d have to look through the logs. Maybe you’d use grep but it was hard to find. Now we have something similar to a mailbox. It will store the last 5 messages any peer sent us. Now you can see what messages were sent to you from that peer. Another cool thing about this is that it can be used as a rudimentary communication channel between you and your channel party.
I can send you an error, because I know you are going to store it, that says “I am going to close the channel in three days because you are not being a good node operator.” We’ll see what UIs we have in future. We now have pagination for ListPayments. There is a response size limit for gRPC which right now I think is 5MB, so that is a pretty big 5MB response coming through there. What happened was that people were reporting that this was insufficient when they had over 100,000 payments. We’ve paginated this thing and now we have a similar tool to ListInvoices which is pretty nice. The final thing is we have Stdin unlock. Up until now the only way to unlock lnd was to do it over the gRPC or interactively the way we have it in lncli. Now we have added a new command where lncli can accept certain parameters over Stdin. This is cool because you can use it in more complex setups using Docker or Kubernetes to make sure you can restart lnd on the fly without manual interaction.
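The pagination just described follows the usual cursor pattern: request a page, feed the response's last index back in as the next offset, and stop when a page comes back empty. A sketch (field names are modeled loosely on lnd's paginated responses and should be treated as assumptions; a fake in-memory backend stands in for lnd):

```python
def list_all_payments(fetch_page, page_size=1000):
    """Drain a paginated endpoint by feeding last_index_offset back in."""
    payments, offset = [], 0
    while True:
        resp = fetch_page(index_offset=offset, max_payments=page_size)
        if not resp["payments"]:
            return payments
        payments.extend(resp["payments"])
        offset = resp["last_index_offset"]

# A fake backend holding 2500 payments indexed 1..2500.
DB = list(range(1, 2501))
def fake_fetch(index_offset, max_payments):
    page = [i for i in DB if i > index_offset][:max_payments]
    return {"payments": page,
            "last_index_offset": page[-1] if page else index_offset}

print(len(list_all_payments(fake_fetch)))
```

Each individual response stays well under the gRPC size limit no matter how many payments the node has recorded.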

Multi part/path/shard payments (Joost Jager)

My part is about multi part or multi path. Is it part or path? Both have been used so far. But now we are also adding the word shard. I saw some jokes on Twitter about Lightning does sharding now. I would like to start by repeating the problem. I have seen this many times at live meetups, people trying to pay with Lightning in a bar. They have a channel, a 20 euro channel. It is fine for paying for their drinks. At some point they need to pay for dinner. The dinner is 30 euros. They think “20 euros is not enough. I am going to open another channel worth 10 euros. That makes a total of 30.” They try to make the payment, having waited for the required confirmations, and it still doesn’t work. Then they realize that in Lightning, up until now, it was necessary to make the payment in one shot. You can only utilize one channel to make a payment. After opening the additional 10 euro channel they need to open one more channel worth 30 euros and use that to make the payment. This is not a great user experience. The first time I heard about Lightning there was already talk about multiple paths and making them atomic to get around this problem. It is a very old problem. The solution has been anticipated for a long time. The basic idea is that as the sender you are able to split your payment into multiple shards. The receiver waits for all of them to arrive. Once they are all there the receiver will settle those shards and then they will get the full payment amount. Support for this on the receiver side was already added in lnd 0.9. Receivers running lnd 0.9 were already able to assemble multiple shards. Other implementations were a little bit ahead of us. They were already able to make multi path payments. Also with lnd it was possible to make a multi path payment using manual sendtoroute commands. You weren’t using the pathfinding of lnd and the payment loop. But you could launch the shards manually.
Now in 0.10 we finally added sending support to our software. One thing to say about this assembling of shards, is this an atomic thing or not? It depends how you look at it. Once those shards start arriving at the recipient they will all have the same hash. If you have the preimage you can settle them as soon as they come in. In that sense it is not atomic. On the other hand if you as the receiver settle a HTLC while the full set isn’t complete, you give away the proof of payment but you haven’t received the full amount. You may possibly never receive it. There are some incentives there for the receiver not to do that. Even though they can settle a payment non-atomically it is not a smart thing to do. What happens if you are waiting? The recipient is waiting for the set to be complete. It doesn’t always happen, because one or two of those shards could arrive but there are a few parts missing and they don’t arrive. What happens then is a timeout comes into effect and the recipient will cancel back those shards. If you are making multipath payments and for some reason the set doesn’t fully arrive at the recipient, after a timeout which is currently fixed at two minutes those shards will be cancelled back. They won’t be stuck for a long time. Two minutes and you will have your money back. It could also be that you are resending. Suppose the first shard of the set arrives and then the second shard arrives one minute later. It could be that because it took so long for the second one to arrive the first one gets cancelled. But what senders do, at least in the implementation we made, is retry those shards. As long as the sender still allows time to try to complete the payment you will keep sending those shards and aim for a complete set at the receiver side.
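The receiver-side behaviour described here can be modeled with a toy class (the two-minute timeout is from the talk; the class and field names are made up for illustration and amounts are abstract units):

```python
MPP_TIMEOUT = 120  # seconds; the talk says this is currently fixed

class ShardSet:
    """Holds incoming HTLC shards for one payment hash until the full
    invoice amount has arrived, then settles them all at once."""
    def __init__(self, invoice_amount):
        self.invoice_amount = invoice_amount
        self.shards = []

    def add(self, amount, now):
        self.shards.append({"amount": amount, "arrived": now})
        total = sum(s["amount"] for s in self.shards)
        return "settle_all" if total >= self.invoice_amount else "hold"

    def expired(self, now):
        """Shards to cancel back because the set never completed."""
        return [s for s in self.shards if now - s["arrived"] >= MPP_TIMEOUT]

s = ShardSet(invoice_amount=30)
print(s.add(10, now=0))    # only 10 of 30 has arrived, so: hold
print(s.add(20, now=50))   # the set is complete, so: settle_all
```

Note that settling early (`settle_all` before the total is reached) is exactly the non-atomic behaviour the talk warns the receiver against: it hands out the preimage without guaranteeing the rest of the amount ever arrives.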

Other problems

This was a very simple example, with just a sender and a receiver. But there are also other problems that we want to solve. What could also happen is that you as the sender have a big channel with sufficient balance, and the recipient has enough total balance, but there is not enough balance in any single one of their channels. This can also happen and you cannot make the payment. You don’t know why because you can only see the balance of your own channels. Also in this case it is useful to send a multipart payment. The third option is that both the sender and the receiver have enough balance in their channels to complete the payment, only somewhere in the network there isn’t enough. This can be worked around by sending multiple parts again. It is not just about completing the payment. It is also about more efficient use of capital. You can use the capital in your channels in a better way because you can take fragments of the payment amount. It is also true that you can now make much larger payments because previously the maximum payment size was bounded by the maximum size of the channels along your route. Pre-wumbo the maximum channel size is 16 million satoshis. A bigger payment than that wasn’t possible. Now with multi path payments you can easily get around that. Yesterday we played around on regtest and all of us paid a 1 BTC payment with Lightning using 20 something paths.

Q - I succeeded with a 2 BTC invoice on regtest.

Laolu - I succeeded with a 0.1 on testnet.

That is also quite interesting. Another thing you can do is perhaps get better fees and better timelocks on the routes that you choose. Previously there were only so many channels that were able to carry that. If you set a low fee limit you couldn’t find any route that could carry it because all the big channels are expensive. Now it is not a problem anymore because you set a lower fee limit, the big channels will be skipped and automatically the payment will be sharded and smaller, cheaper channels will be utilized to complete the payment. The same is true for reliability. If you have a big channel that has proven to be unreliable, with sharding you can get around that and use multiple smaller, more reliable channels. There is also a trade-off because if you launch multiple HTLCs there is more risk that something goes wrong with any of those. A final thing of why you would want to do this is to obfuscate the payment amount. If you are a node connected to a well known destination you could see what kind of amounts are going towards that destination and now you aren’t that sure anymore. You as a node forwarding could be just one shard of a payment that has a bigger total value. For smaller routing nodes you can participate in relay of the bigger payments.

How to split?

How do we split? The splitting algorithm, a lot has been said about this already. It is a really difficult problem. If you look at the graph I have here you know very little. You know the topology of the graph, you know your own balances but you have no idea what the balances of all the other channels on the network are. You do know the capacities but even the capacities are only a lower bound because people can open shadow channels, meaning that they open multiple channels that are private. They are not advertised on the graph but they can still carry payments. You have very incomplete information. What could even happen is that you search for a path through this graph and you make a wrong decision. Wrong means you spend the balance in some channel on the network in a way that prevents you from getting the second shard to the recipient. There is also the dynamic interaction between multiple of your own HTLCs that you send out. That also makes it hard.

Failed attempt, now what?

Then suppose an attempt fails. What do you know? The thicker lines are the path that you chose. There is a failure between R3 and R5. We tried to pay 20 so at least we know that connection can’t take 20. The other connections on the path can take at least 20, and from R5 to the recipient we still don’t know much. If you think about the optimal solution for this problem it is just very difficult. If a shard succeeds no feedback is given. That is a current characteristic of Lightning. If you send a payment and it arrives, and the recipient is holding it similar to a HODL invoice, you won’t hear anything back. It could be that this payment got stuck along the way or that it reached the destination. You don’t really know. If you are making a MPP payment you are sending those shards. If your HTLCs get stuck it is actually a good thing because they are held by the receiver. It could also be that they are not held by the receiver. That is another thing that is difficult. It also means that you need to launch those HTLCs concurrently. You can’t launch the first one and wait for it to arrive because you have no way to figure out that it arrived. You need to launch them concurrently but you also don’t want to launch them too fast because otherwise you don’t allow the system to return failures to you, which can be useful to plan the sending of your next shards. Then there are optimization goals. It is not only about completing the payment, you also want to optimize for fees, for timelock, for reliability. Maybe you want to rebalance as you go. Try to pick the channels and make the payment through those channels in such a way that your channels end up more balanced than they were before. Finally the pathfinding algorithm that we have works backwards. Suppose you think “I want to spend 10 through this channel.”
In the current implementation it is not really possible to do that because we start at the recipient with the value that we want to deliver to the recipient and then we search backwards. Maybe it comes out like you need to pay 12 to get 10 to the recipient. The code at this moment doesn’t have full control over how much we send out through every channel. I think this is quite a lot of information. Just to highlight how complicated this problem really is. First we had the idea of let’s see how far we get. Very soon we said “No we are not going to see how far we get. We are just going to do the simplest thing that is possible.”

Divide and conquer: the Halvening

The halvening, another halvening in Lightning. It means that if we fail we are going to try again for half the amount. It is very contrary to all the complexity that I tried to explain. The algorithm that we implemented is very basic. To make it clear I’ll give an example. The sender has enough liquidity. They need to pay the recipient. The amount that they pay is 28 and they have no idea that between R1 and the recipient there are channels with balance 10 and 20. The first try will be for the full amount. They try 28. It will fail. Then they try for half that amount. They try 14. 14 succeeds, meaning that I am at the second bullet. The 20 channel goes down to 6. We have two channels, 10 and 6. We try 14. It fails again. We try 7. It succeeds. Then we are at 3 and 6. We try 7. It is not possible anymore. We try 3.5, it succeeds. In the end we completed the payment and we had four failures and five successes. We sent the payment in five stages while theoretically you could do it in two. You could send 10 and 18 and it would also be fine. The thing is that we are not only sending the payment, at the same time we are probing the balances of those channels on the network to figure out what they can carry. This is far from optimal but it does work for a lot of cases. It gets a bit more difficult if the payment amount is very close to the theoretical maximum. It is not optimal, the algorithm has difficulty finding a solution. We still have to see how often this is going to happen in practice. Everything here is super new. I have no idea how it is going to work out. It will be interesting to see what feedback we will get on this first attempt. Also, to keep the halvening in check, we added two restrictions. We set a minimum shard amount. We are not going to split down to a single satoshi. At some point we just stop splitting and we say “if it doesn’t work now we give up.” It is currently set at 10K sats.
Your shards will never be smaller than 10K sats unless the remaining amount of your payment is less than that. The second thing is you can control the maximum number of shards. The reason for that is that if you didn’t have a maximum, especially if we launch a lot of payment attempts at the same time through different channels, that creates problems of its own that I explained before. We don’t get any failure info while we have already launched a whole bunch of attempts. This is also not good. Max shards is not ideally a parameter that you would ask the user to set. But for now, because we are at the very beginning of this field of research, we just expose it and we will see how people use it.
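The walkthrough above can be reproduced with a small simulation. This is a toy model under stated assumptions: the whole route is reduced to the two bottleneck channels at the recipient, each shard is assumed to use the largest channel that fits (which is not how lnd's real pathfinding works), and `min_shard` stands in for the 10K sat floor:

```python
def halvening(amount, balances, min_shard=0.01):
    """Try the remaining amount; on failure halve the shard, on success
    reset to whatever is still owed. Returns (shard, succeeded) pairs."""
    balances = list(balances)
    attempts, remaining, shard = [], amount, amount
    while remaining > 0:
        shard = min(shard, remaining)
        if shard < min_shard:
            break  # give up rather than splitting forever
        best = balances.index(max(balances))
        if balances[best] >= shard:
            attempts.append((shard, True))
            balances[best] -= shard
            remaining -= shard
            shard = remaining
        else:
            attempts.append((shard, False))
            shard /= 2
    return attempts

# Joost's example: pay 28 over bottleneck channels of 10 and 20.
attempts = halvening(28, [10, 20])
print([a for a, ok in attempts if ok])  # the shards that went through
print(sum(ok for _, ok in attempts), "successes,",
      sum(not ok for _, ok in attempts), "failures")
```

Running this reproduces the count from the talk: four failed probes and five successful shards, versus the theoretical optimum of two shards (10 and 18) that a sender with perfect balance information could have chosen.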

What is my LND doing?

One other thing, because so much is happening now, multiple concurrent attempts can fail, new attempts are launched, it became necessary to provide better feedback. Even during development it was just hard to figure out, what is happening now? Does it work or doesn’t it work? There are two things that are currently available. One is still a PR, an improvement to lncli that allows the sendpayment or payinvoice commands to give a bit better feedback. I also tweeted about this. Instead of huge amounts of JSON it will print out a table and every row in the table is a HTLC showing what its status is. Is it in flight or is it finished? Succeeded or failed? What route was taken? It gives you a better feeling of what is going on when you make a payment, in particular a MPP payment. The second thing, this is in a repo of my own, is a tool that takes the output of listpayments or the output of sendpayment. This output describes all the HTLCs, when they were launched and when they were resolved. It creates a timing diagram out of that. This is also very useful to see what is going on and to investigate particular issues.

Multi-loop out

The final thing I want to talk about is multi-loop out. A very exciting possibility is that when you loop out you can do this with multiple channels at the same time. I assume everyone knows what Loop is? What you can do with Loop is change the balance of your channels. Basically you make a payment to the loop server and this shifts the balance of your channels, allowing you to receive money again in future. The money that you pay to the loop server is sent back onchain in a non-custodial way. Currently this is only possible with a single channel. You send out one payment across one route and you get it back onchain. You also have to pay chain fees for that single channel loop out. You get the money back onchain but you need to sweep it. That is one chain transaction. With multi-loop out you have a bunch of channels and you use all of them to make a single Lightning payment to the loop server and you get it back in a single transaction onchain. With one onchain transaction you have looped out lots of your channels. Suppose you are a merchant, you’ve got 20 channels and they all got depleted, meaning that people kept paying you until you could receive no more. Then with one loop out request you can loop all the balance out of those channels, push everything out to the loop server indirectly and then receive the money back onchain. You can receive directly into your exchange deposit address.

PSBT channel funding (Oliver Gugger)

I think it was Marty who wrote in Marty’s Bent that 2020 would be the year of PSBT. I agree and this is the part we are doing. With lnd 0.10 you will be able to open channels using PSBT. It needed some preparatory work in 0.9. Laolu did some work to make it possible. Also we needed the PSBT library by Adam Gibson that was finally merged into btcutil. These two things allowed us to make this feature work. On the user interface side it is pretty small. You only get a new flag --psbt on your openchannel command and that will launch an interactive dialog on the command line. It will tell you what to do; you interact with it and finally give it a PSBT. I decided not to do a live demo but there is example documentation in the repo. If you go to the docs you can go through that. It is pretty cool on the command line but this will be much more useful when it is automated through the gRPC interface. We hope a lot of wallets will make use of that. The one tiny problem that we have is that there is a time limit. The reason is that we actually start the negotiation with the remote peer when you open a channel. We pause that process and the peer will time out after ten minutes. Keep that in mind. I hope that won’t be a problem when using this.

Laolu - Ten minutes seems like a lot of time.

Especially if you can automate stuff it should be plenty of time.

Laolu - One cool thing automation wise is opening multiple channels in a single transaction. You can actually use a PSBT several times over and maybe open ten channels in a single transaction. That is pretty cool feature wise.

What is a PSBT?

What is a PSBT? Maybe not everyone knows. It is a BIP of course. It stands for partially signed Bitcoin transaction. It is a standard format to exchange information about a transaction that you want to create. It allows wallets to cooperate on the process of assembling and signing a transaction. Because the design space is quite big the creators of the BIP decided to define roles that you as a wallet, as software, can take. There is the Creator, the Updater, the Signer, the Combiner to combine multiple PSBTs, the Input Finalizer to assemble all the witnesses and finally the Transaction Extractor to extract the raw Bitcoin transaction. One part of it is an actual wire format Bitcoin raw transaction which step by step becomes more complete. The other part is that for each input and each output you can add additional information. It is a TLV-like format that has a few standard data types. You can add the UTXO that is being spent, you can add the derivation path of an input, you can specify the sighash type and what scripts are being used. All these things aren’t really in a transaction itself, they are additional information that the participants need to complete the transaction. The partial part is very important. Not only the partial signatures but also the partial input and output lists.
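The role sequence just listed can be sketched as a toy pipeline. This models only the hand-offs between roles, not the real binary format of BIP 174; every field name and value here is made up for illustration:

```python
def creator(inputs, outputs):
    """Creator: lay out the unsigned transaction skeleton."""
    return {"tx": {"inputs": inputs, "outputs": outputs},
            "input_meta": [{} for _ in inputs], "final": False}

def updater(psbt, utxos):
    """Updater: attach the spent UTXO (and e.g. derivation paths)."""
    for meta, utxo in zip(psbt["input_meta"], utxos):
        meta["utxo"] = utxo
    return psbt

def signer(psbt, key):
    """Signer: add a partial signature for every input it can sign."""
    for meta in psbt["input_meta"]:
        meta.setdefault("partial_sigs", []).append(f"sig({key},{meta['utxo']})")
    return psbt

def finalizer(psbt):
    """Input Finalizer: turn partial sigs into final witnesses."""
    for meta in psbt["input_meta"]:
        meta["witness"] = meta.pop("partial_sigs")
    psbt["final"] = True
    return psbt

def extractor(psbt):
    """Transaction Extractor: emit the broadcastable raw transaction."""
    assert psbt["final"]
    return psbt["tx"]

psbt = creator(["outpoint0"], ["funding_output"])
psbt = updater(psbt, ["utxo0"])
psbt = signer(psbt, "hw_key")
tx = extractor(finalizer(psbt))
print(tx["outputs"])
```

The point of splitting the roles is that each step can live in a different piece of software, which is exactly how the channel funding flow below distributes them across lnd, bitcoind and a hardware wallet.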

PSBT roles example

Let’s go through an example. Let’s say you have a hardware wallet that has all your funds. You have bitcoind that has only the xpub, so it only has public keys, but it knows all the UTXOs. It knows the chain, it knows the fees and it knows all the derivation paths but it has no keys. You want to use these to open a channel in lnd. That’s exactly what we used in our example use case. We’d go ahead and tell lnd I want to open a channel with a peer by using PSBT. It would then give you the instructions on what to do. It would say “Now go to bitcoind, tell it to run this command and then you go to your hardware wallet” or whatever. At the end you have a finalized PSBT, you give that back to lnd, it will extract the transaction and publish it. The result is an open channel funded directly from the hardware wallet. As Laolu already said this allows for multiple cool use cases. One of these is batching. You can open multiple channels. This is already partially implemented in the command line. You can specify a previous PSBT that you want to add your new channel output to. You can also automate this to open a whole bunch of channels. With that you can also spend directly from a multisig output. You could even take that loop out sweep and push it directly into a channel funding. I think it is really cool what Joost just explained about multi loop out. It allows for the merchant zero balance scenario where a merchant has a hardware wallet, opens a channel from that hardware wallet, the closing transaction will also go to the hardware wallet, and they receive money during the day. At the end of the day they loop everything out and in the evening they have zero hot balance on their lnd node. They can do that every day so it is basically like emptying the cash register every day. A robber or an attacker wouldn’t have any funds to get. That is pretty cool. It also allows for some privacy features. If you think about uniform output sizes and batching you get the idea.

Future PSBT use cases

We, Laolu especially, also have many plans on how to spin this even further. You could have a pair of lnd nodes. One is completely watch only. Everything that needs any kind of private key or signing is abstracted away behind an RPC interface. This is the node that is publicly connected to the internet that talks to the other nodes. Then there is a second node that is hardened, firewalled, all the security measures that you can think of, which is really not that connected to the internet and only has the hot keys and answers the signing requests from the other node. It would add a small delay but we think it is possible to do this. You could actually be more careful about all your hot keys. Of course everything that touches the chain would be done through PSBT which also makes it much easier to integrate into other wallets. We also plan to implement more of the PSBT roles into lnd so the lnd wallet can do all of these functions. It can do the whole funding, fee estimation, whatever in lnd.
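As a toy illustration of that watch-only/signer split (class names are made up and the "signature" is just an HMAC stand-in for a real ECDSA or Schnorr signature), the public node holds no key material and forwards every signing request across a boundary:

```python
import hashlib
import hmac

class RemoteSigner:
    """Hardened node: holds the hot keys, answers signing requests."""
    def __init__(self, secret: bytes):
        self._secret = secret

    def sign(self, sighash: bytes) -> bytes:
        # Stand-in for producing a real signature over the sighash.
        return hmac.new(self._secret, sighash, hashlib.sha256).digest()

class WatchOnlyNode:
    """Public node: builds transactions, delegates all signing.
    In the real design this call would cross an RPC boundary."""
    def __init__(self, signer: RemoteSigner):
        self._signer = signer

    def sign_psbt_input(self, sighash: bytes) -> bytes:
        return self._signer.sign(sighash)

node = WatchOnlyNode(RemoteSigner(b"hot-key"))
sig = node.sign_psbt_input(b"\x01" * 32)
```

The design choice being sketched: compromising the internet-facing node yields no keys, only the ability to request signatures, which the hardened node can rate limit or policy check.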

Q&A

Q - All of the changes to the HTLCs and the sizes, is that partially an attempt to prevent some of the theoretical attacks where they try to DDOS all the nodes or is that just a basic optimization?

Laolu - The fees that we added in this new format make it easier to defend against that in the future. If someone tried to DOS the node and make them force close they would previously be locked into the fee rate at that particular point. But now the node can increase its fee rate to get into a block. It gives you a lot more control. Right now you have to guess ahead of time what the fee should be whenever you force close. Now you can lowball it and use CPFP to bump it up progressively. Before you were locked in to that fee rate which is not good because the conditions on the mempool and the chain can change, congestion and things like that.
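The arithmetic behind that CPFP bump is simple; here is a worked sketch with made-up numbers (the function and its parameters are illustrative, not lnd code). Miners evaluate the parent and the anchor-spending child as a package, so the child must pay enough to bring the whole package up to the target feerate:

```python
def cpfp_child_fee(parent_fee, parent_vsize, child_vsize, target_rate):
    """Fee in sats the child must pay so the parent+child package
    reaches target_rate (sats/vbyte). 0 if the parent already pays
    enough on its own."""
    package_fee_needed = target_rate * (parent_vsize + child_vsize)
    return max(0, package_fee_needed - parent_fee)

# Commitment broadcast lowballed at 2 sat/vb (700 sats on 350 vbytes),
# then fees spike to 40 sat/vb while we need to confirm for an HTLC:
fee = cpfp_child_fee(parent_fee=700, parent_vsize=350,
                     child_vsize=150, target_rate=40)
# (350 + 150) * 40 - 700 = 19_300 sats on the anchor-spending child
```

This is why lowballing the commitment fee is safe with anchors: the shortfall can always be made up later by the child, at the feerate the emergency actually requires.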

Q - If the fees get higher than the channel balance itself does it also help with that since you are locked in at a lower fee? Let’s say you have a channel for 30 dollars and the fees are 25 dollars?

Laolu - In this case you can have the fee be much lower, you don’t really care about it. You can go onchain later on. The other thing is that this is only for force close, whenever you are forced to broadcast onchain, meaning there is an HTLC that is about to time out or the other party is offline. This is the emergency case and it is about making sure that in the emergency case you can bump the fee up to get into the chain in time. It should generally lower fees as well because you are no longer trying to guess the fee of the next block, you are guessing the fee to get into the mempool which is a lot less volatile and typically a lot lower.

Q - What is the current status of dual funding? I have seen some discussion on the mailing list about some anti-DOS aspects of it.

Laolu - It is something I am working on directly and Lisa from c-lightning too. I think they are worried about issues such as making sure that you can’t probe the balance of another user by doing repeated dual funding attempts to see all of their inputs. Also possibly making sure that if you are providing an input, the input is unspent… It seems like there are other issues as far as privacy and DOS holding it back right now. On the upside you can actually use stuff today to do similar things to dual funding. You can have two channels, I open one to you, you open one to me. Obviously it is not as good because you have two outputs but that is a holdover until people figure out whatever they are comfortable with in terms of mitigations on the dual funding side. There is a PR open in the spec repo. I think c-lightning has another PR open themselves, maybe they are looking to get it into their next release. I’m not sure. You can simulate dual funding if you are cool with having two outputs versus one.

Q - Can I just clarify? You are suggesting the idea that you would dual fund in the sense that you would share inputs but you just have two separate channel outputs. Is that what you mean?

Laolu - Theoretically with what we have right now with PSBT you can have a single transaction open two channels in both directions. I open one to you, you open one to me. Now we are “dual funded”. We have an initial starting state but obviously as time goes on maybe the channels get a little more unbalanced, maybe you need to do things like looping. Obviously the advantage of doing it the proper way is you have one UTXO in the chain versus two.

Q - What happened to the TLV shop?

Joost - It is not my ambition to be a shop owner. It was just a demonstration. The internet got bored pretty quickly so we took it down. There is a PR in the works at the moment that allows you to do this yourself. For the TLV shop I had to fork lnd to do the interactive keysend acceptance. Currently in lnd if you accept keysend payments you will always accept them. For something like TLV shop it is important to carefully inspect all the records, see if the order is correct, is the address correct, is the amount correct? Only then do you settle the HTLC. Otherwise you want to cancel back because you may otherwise end up getting money and not be able to send out the order. That PR is coming and I think that is a very important building block to build similar services yourself.
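The acceptance logic being described could be sketched like this. The record number, fields and price are all invented for the example, and this is not the actual lnd interceptor API, just the shape of the decision: inspect the custom records on the incoming HTLC, and only settle when the order is actually fulfillable:

```python
ORDER_RECORD = 65536  # hypothetical app-specific custom TLV record

def handle_keysend(htlc_amt_msat: int, records: dict, price_msat: int) -> str:
    """Decide whether to settle or cancel an incoming keysend HTLC,
    based on the attached custom records."""
    order = records.get(ORDER_RECORD)
    if order is None:
        return "cancel"   # nothing to ship, fail the HTLC back
    if htlc_amt_msat < price_msat:
        return "cancel"   # underpaid, fail it back rather than keep funds
    return "settle"       # accept the payment and process the order

decision = handle_keysend(1_000, {ORDER_RECORD: b"tshirt"}, price_msat=1_000)
```

Cancelling back rather than settling on a bad order is the key point: settling means you keep money for an order you cannot fulfil.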

Q - On the mailing list Alex Bosworth said that there are potentially breaking changes with the API in 0.10. I was wondering if you could speak to that.

Laolu - MPP can only be used in the router RPC server. This is not the main one we are using, it is another one. What happened is that there are some changes to the experimental one. If you are using the main RPC server everything is fine. If you are using the one that is a little more bleeding edge you may need to update your code. We envisioned that we would change it over time because we weren’t sure what MPP would look like interface-wise. Basically the response has changed if you are using that new RPC server. If you are using the old one you are fine. We do want people to migrate to this new one because the new one is better and will be a lot more stable going forward.

Joost - The only thing to add there is the subservers have been experimental so far, they still are really. It is unclear to us how many people rely on this. The impact of this change is still to be seen. We are very interested to get feedback during the RC cycle to see what people think of this change.

Q - Is this change going to have an effect on the REST service as well?

Joost - There is no REST service yet for the router RPC so no.

Laolu - Once it is more grown up there will be a REST service. I’ll be doing more work to make sure everything is covered there. It is a newer more bleeding edge thing but we will progressively make it more refined and more stable.

Joost - That is an important thing as well. If you want to do multipart payments you need to use the router RPC now. It is not enabled in the main RPC. It is an incentive to move over.

Oliver - And currently an incentive to use gRPC because there are, as you said, other RPCs not available on REST yet.

Q - A question about anchor outputs. Does that mean we don’t do fee updates anymore at all in anchor outputs?

Laolu - Right now we do still do fee updates. You do a fee update but with a much more lax fee estimate, rather than two or three blocks more like twenty blocks. We still do that estimate, just with a longer target. In the future we will probably have a mechanism where you do manual fee updates yourself or have a fee update ceiling in that channel. Right now all the capabilities are there.

Q - That would also be a cool security feature if you could say “I only make outgoing payments to this channel. There are no fee updates. I know I’m always on the latest channel state. It is harder for me to get breached.”

A - We could definitely add that in. That makes sense.